This is the sequel to our previous post, How To Create Updatable Core ML Models With Core ML Tools.
Core ML got a big boost this year with the Core ML 3 update at WWDC 2019. Among the many improvements it brought, on-device learning stands out.
We've already covered building a Cat vs. Dog image classifier with our own Core ML model in a previous article. With iOS 13, Vision is even more powerful: it now offers VNRecognizeAnimalRequest to identify cats and dogs in images.
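As a rough sketch of how this works, the snippet below runs a VNRecognizeAnimalRequest over a UIImage; the function name and the simple print-based handling are placeholders for illustration, not part of the original article.

```swift
import UIKit
import Vision

// Minimal sketch: detect cats and dogs in a still image (iOS 13+).
// `detectAnimals` is a hypothetical helper name.
func detectAnimals(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNRecognizeAnimalRequest { request, error in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // Each observation carries labels ("Cat" / "Dog"),
            // a confidence score, and a bounding box.
            for label in observation.labels {
                print("\(label.identifier): \(label.confidence)")
            }
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

In a real app you would run this off the main thread and use the bounding boxes to draw overlays instead of printing.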
iOS 13 has finally rolled out to the public, and in no time 13.1 was out as well. I'm sure you'll be shipping your next app updates with it. Before doing that, let's go through a checklist of essential things.
Deep learning is a popular and interesting subset of machine learning, one that brings neural networks into the limelight. Many complex tasks, such as image classification and speech recognition, become achievable with its help. We'll be focusing on image classification only in this post.
Previously, we used Vision and Core ML to scan and recognize text in an image. Now that iOS 13 is here, the Vision API is vastly improved. Moreover, the new VisionKit framework allows us to scan documents using the camera.
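To give a feel for the VisionKit piece, here is a minimal sketch of presenting the built-in document scanner and retrieving the scanned pages; the class name is a made-up example, and error handling is omitted for brevity.

```swift
import UIKit
import VisionKit

// Minimal sketch: present VNDocumentCameraViewController (iOS 13+)
// and collect the scanned pages as UIImages.
class ScannerViewController: UIViewController, VNDocumentCameraViewControllerDelegate {

    func presentScanner() {
        let scanner = VNDocumentCameraViewController()
        scanner.delegate = self
        present(scanner, animated: true)
    }

    func documentCameraViewController(_ controller: VNDocumentCameraViewController,
                                      didFinishWith scan: VNDocumentCameraScan) {
        // Each scanned page is available as a UIImage.
        for pageIndex in 0..<scan.pageCount {
            let image = scan.imageOfPage(at: pageIndex)
            print("Scanned page \(pageIndex): \(image.size)")
        }
        controller.dismiss(animated: true)
    }
}
```

The scanned images can then be fed to Vision for text recognition, which is where the improved API comes in.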
Recently, I was asked to lock the device orientation to portrait only in an iOS application. Trusting Xcode blindly, I went to the Project Navigator -> Deployment Info and checked the portrait-only mode.
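For context, the Deployment Info checkboxes can also be enforced in code. One common approach, sketched below, is overriding the app delegate's supported-orientations callback; this is a general UIKit technique, not necessarily the exact fix the article arrives at.

```swift
import UIKit

// Minimal sketch: lock the whole app to portrait in code, rather than
// relying solely on the Deployment Info checkboxes.
@main
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     supportedInterfaceOrientationsFor window: UIWindow?) -> UIInterfaceOrientationMask {
        // Returning .portrait here overrides per-view-controller settings.
        return .portrait
    }
}
```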
Our goal for today is to build an iOS application that identifies text in a still image.
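As a preview of the core idea, the sketch below runs Vision's VNRecognizeTextRequest (iOS 13+) over a still image and prints the recognized strings; the helper name and print-based output are illustrative assumptions.

```swift
import UIKit
import Vision

// Minimal sketch: recognize text in a still image with VNRecognizeTextRequest.
// `recognizeText` is a hypothetical helper name.
func recognizeText(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        for observation in observations {
            // topCandidates(_:) returns the most likely transcriptions.
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string)
            }
        }
    }
    // .accurate trades speed for better recognition quality.
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```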