Apple introduced plenty of new features in the Vision framework at WWDC 2019. Besides providing built-in image classification models for identifying pets and improving face feature tracking, one feature that stood out was saliency.
We've already covered a Cat vs. Dog image classifier using our own Core ML model in a previous article. With iOS 13, Vision is even more powerful: it now ships with VNRecognizeAnimalRequest, a built-in request for identifying cats and dogs in images, with no custom model required.
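As a quick illustration, here is a minimal sketch of how VNRecognizeAnimalRequest can be used. The function name `detectAnimals` and the `cgImage` parameter are placeholders of our own; the image can come from anywhere in your app:

```swift
import Vision

// A minimal sketch: run animal recognition on a CGImage.
// `detectAnimals` is a hypothetical helper name, not part of Vision.
func detectAnimals(in cgImage: CGImage) {
    let request = VNRecognizeAnimalRequest { request, error in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            // Each observation carries classification labels (e.g. cat or dog),
            // each with a confidence score, plus a bounding box.
            for label in observation.labels {
                print("\(label.identifier): \(label.confidence)")
            }
        }
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

The completion handler runs once the request finishes, so any UI updates driven by the results should be dispatched back to the main queue.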
Previously, we used Vision and Core ML to scan and recognize text in an image. Now that iOS 13 is here, the Vision API is vastly improved. In addition, the new VisionKit framework lets us scan documents using the camera.
Our goal for today is to build an iOS application that recognizes text in a still image.
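The core of what we'll build rests on Vision's VNRecognizeTextRequest. The sketch below shows the basic flow under the assumption that `cgImage` holds the still image to scan; `recognizeText` is a placeholder name for illustration:

```swift
import Vision

// A minimal sketch: recognize text in a CGImage with VNRecognizeTextRequest.
// `recognizeText` is a hypothetical helper name, not part of Vision.
func recognizeText(in cgImage: CGImage) {
    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Each observation is one detected text region; take its best candidate string.
        let lines = observations.compactMap { $0.topCandidates(1).first?.string }
        print(lines.joined(separator: "\n"))
    }
    // .accurate trades speed for recognition quality; .fast is the alternative.
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

We'll flesh this out over the rest of the article, wiring the results into the app's UI.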