• The OCR framework uses the camera and requires filling in the Privacy - Camera Usage Description key and value if the Info.plist is edited in Xcode.

Camera Usage Description

If editing the Info.plist as raw XML, add the NSCameraUsageDescription key as shown below.

<key>NSCameraUsageDescription</key>
<string>Get VitalSnap readings</string>


The Validic mobile library OCR feature provides the capability to obtain readings from devices without requiring Bluetooth or HealthKit integration.

OCR Peripherals

VLDOCRPeripheral represents peripheral models that can be processed by OCR.

A peripheral object contains various properties which can be displayed to the user:

  • name - Name of the peripheral, composed of the manufacturer name and model number.
  • imageURL - URL for an image of the peripheral.
  • overlayImage - UIImage of the overlay used to position the peripheral within the camera preview.
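These properties can be used to present a peripheral picker to the user. A minimal sketch, assuming a hypothetical PeripheralCell with nameLabel and peripheralImageView outlets (these names are illustrative, not part of the library):

```swift
// Sketch: populate a table view cell from a VLDOCRPeripheral.
// PeripheralCell, nameLabel, and peripheralImageView are hypothetical.
func configure(cell: PeripheralCell, with peripheral: VLDOCRPeripheral) {
    cell.nameLabel.text = peripheral.name
    if let url = peripheral.imageURL {
        // Fetch the peripheral image asynchronously
        URLSession.shared.dataTask(with: url) { data, _, _ in
            guard let data = data, let image = UIImage(data: data) else { return }
            DispatchQueue.main.async {
                cell.peripheralImageView.image = image
            }
        }.resume()
    }
}
```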

To obtain a VLDOCRPeripheral, several class methods are provided to retrieve one or more supported peripherals.

// Retrieve a specific peripheral
VLDOCRPeripheral *peripheral = [VLDOCRPeripheral peripheralForID:1];
// or by type
NSArray *peripherals = [VLDOCRPeripheral peripheralsOfType: VLDPeripheralTypeGlucoseMeter];
// or retrieve a list of all supported peripherals
NSArray *peripherals = [VLDOCRPeripheral supportedPeripherals];
// Retrieve a specific peripheral
let peripheral = VLDOCRPeripheral(forID: 1)
// or by type
let peripherals = VLDOCRPeripheral.peripherals(of: .glucoseMeter)
// or retrieve a list of all supported peripherals
let peripherals = VLDOCRPeripheral.supportedPeripherals()

OCR View controller

The VLDOCRViewController provides a simple interface to optically scan a peripheral and provide the resulting reading. This view controller presents a prepackaged view and is typically displayed modally. It presents a camera view with an overlay appropriate to the peripheral being scanned. Partial results are displayed in the view while recognition is in progress. When recognition converges on a value, the delegate method, ocrViewController:didCompleteReading:image:metadata:, is called with the results. An example application, “OCR example”, written in Swift, is provided which illustrates this OCR API. It is contained within the “Example Apps/Swift OCR” directory in the library download.

The app must have access to the camera. Permission will be requested on first launch. If the library is unable to access the camera, the delegate method, ocrViewControllerWasDeniedCameraAuthorization: is called. In iOS 10 and later, the info.plist must also include a Camera Usage Description as described in Supporting iOS 10.
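Authorization can also be checked or requested up front using standard AVFoundation APIs, so the user is not surprised by the system prompt mid-scan. A sketch, where presentOCRViewController() is a hypothetical helper that presents the VLDOCRViewController:

```swift
import AVFoundation

// Check camera authorization before presenting the OCR view controller.
func presentOCRIfAuthorized() {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        presentOCRViewController()
    case .notDetermined:
        // Triggers the system permission prompt on first use
        AVCaptureDevice.requestAccess(for: .video) { granted in
            DispatchQueue.main.async {
                if granted { self.presentOCRViewController() }
            }
        }
    default:
        // Denied or restricted; direct the user to Settings
        print("Camera access is not available")
    }
}
```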

The presented view provides a button to cancel OCR. If the user cancels, the delegate method, ocrViewControllerDidCancel: is called.

When any of the three delegate methods are invoked, the view controller should be dismissed.

The following example constructs a VLDOCRViewController for the designated peripheral and presents it.

if let controller = VLDOCRViewController(ocrPeripheralID: 3) {
    controller.delegate = self
    self.present(controller, animated: true, completion: nil)
}

Delegate callbacks dismiss the view controller and handle the returned record. The recognized value returned in the delegate callback should be displayed to the user so that they can verify it and modify it if incorrect. The recognized image is also provided and can be displayed to the user to help verify the value.

func ocrViewControllerDidCancel(_ viewController: VLDOCRViewController) {
    viewController.dismiss(animated: true, completion: nil)
}

func ocrViewControllerWasDeniedCameraAuthorization(_ viewController: VLDOCRViewController) {
    print("Unable to access camera")
    viewController.dismiss(animated: true, completion: nil)
}

func ocrViewController(_ viewController: VLDOCRViewController, didCompleteReading record: VLDRecord?, image: UIImage?, metadata: [AnyHashable : Any]?) {
    // Display the value and image, allow the user to correct
    viewController.dismiss(animated: true, completion: nil)
}

The record received from the OCR View Controller should be verified by the user and then submitted to the Validic server.

// After verification, queue the record and image to be uploaded to the server
VLDSession.sharedInstance().submitRecord(record, image: image)

Runtime unit selection

For any glucose meter in the lineup of supported meters, you can specify mmol/L or mg/dL at runtime for a given reading. If no unit is provided, mg/dL is assumed.

An example using the VLDOCRViewController:

ocrViewController = VLDOCRViewController(ocrPeripheralID: 9, glucoseUnit: .MMOLL)

OCR - Custom view (optional)

If more control over the OCR view is needed, a custom view can be implemented to perform OCR using the VLDOCRController class. The simpler VLDOCRViewController is recommended unless customizing the view is required. An example application, “VitalSnap example”, written in Swift, is provided which illustrates this OCR API. It is contained within the “Example apps/Swift VitalSnap” directory in the library download.

Process overview

An instance of the VLDOCRController class is used to manage the recognition process. A VLDOCRPeripheral represents the peripheral being scanned. The controller is initialized with a peripheral object.

The recognition process involves a video camera preview layer and an overlay image, both obtained from the controller. The overlay image must be precisely positioned over the preview layer for proper alignment and recognition using a frame provided by the controller.

The app creates a view hierarchy where a parent view contains two subviews (preview and overlay). The controller is configured with the size of the preview layer and it generates the appropriate frame for the overlay image view.

OCR processing begins as soon as the controller is initialized. A delegate is called with intermediate and final results. The camera capture session ends when the controller is deallocated.

View structure

Two different approaches can be used to structure views for the preview and overlay, using either a UIImageView or CALayer for the overlay image.

In the preferred view-based approach, a parent view contains two subviews, one for the preview and the other for the overlay image. The preview layer is obtained from the VLDOCRController and added as a sublayer to the preview view. Once this layer is added, no subviews of the preview view will be visible. The overlay view, typically a UIImageView and a sibling of the preview view, contains the overlay image obtained from the VLDOCRController, positioned to the frame specified by the controller. The origin of both views is assumed to be (0, 0) within their parent. The parent and preview view are typically full screen.
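If the views are built programmatically rather than in a storyboard, the hierarchy described above can be sketched as follows (view names are illustrative; controller and peripheral are assumed to be existing properties):

```swift
// Parent view containing a preview view and a sibling overlay image view.
let previewView = UIView(frame: view.bounds)
let overlayView = UIImageView()
view.addSubview(previewView)
view.addSubview(overlayView) // sibling of previewView, drawn on top

// Add the camera preview layer obtained from the VLDOCRController
if let previewLayer = controller?.previewLayer {
    previewView.layer.addSublayer(previewLayer)
    previewLayer.frame = previewView.bounds
}

// Overlay image from the peripheral; its frame is set later
// from the controller's overlayFrame property
overlayView.image = peripheral.overlayImage()
```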

In the alternate approach, a single view is used. The preview layer is added to this view, an additional CALayer is created with the overlay image as its contents, and that layer is added as another sublayer to the same view. The overlay layer’s frame must be set to the frame given by the VLDOCRController.

Initializing VLDOCRController

The VLDOCRController requires a VLDOCRPeripheral for its initializers.

Once a peripheral is obtained, construct the VLDOCRController and assign its delegate.

// Maintain a reference to the controller
var controller: VLDOCRController?

self.controller = VLDOCRController(ocrPeripheral: peripheral)
// Assign a delegate
self.controller?.delegate = self


The camera preview layer is obtained from VLDOCRController and is added as a sublayer to a view within a view hierarchy as described under View Structure.

// Property or IBOutlet
@IBOutlet var previewView: UIView!

// Typically set up in viewDidLoad
override func viewDidLoad() {
    super.viewDidLoad()
    if let previewLayer: AVCaptureVideoPreviewLayer = controller?.previewLayer {
        previewView.layer.addSublayer(previewLayer)
    }
}

Set the previewLayer’s frame to match the bounds of its containing preview view, typically within viewDidLayoutSubviews.

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    self.controller?.previewLayer?.frame = self.previewView.bounds
}

Overlay in a view

An overlay image is displayed over the preview layer using a specific frame calculated by VLDOCRController. The image is obtained from the VLDOCRPeripheral used to initialize the VLDOCRController.

@IBOutlet var overlayView: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    // let peripheral: VLDOCRPeripheral = ...
    overlayView.image = peripheral.overlayImage()
}

Whenever the preview layer’s frame changes, the controller needs to be informed by invoking configureForPreviewLayerSize:. After this call, the overlayView’s frame should be set using the overlayFrame property of the VLDOCRController.

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Configure VLDOCRController with the current preview layer size
    self.controller?.configure(forPreviewLayerSize: self.previewView.bounds.size)
    self.overlayView.frame = self.controller?.overlayFrame ?? .zero
}

Process results

During OCR processing, methods on a delegate conforming to the VLDOCRControllerDelegate protocol are invoked.

ocrController:didProcessResult: is invoked for each camera frame captured and provides intermediate results. The VLDOCRResult object contains the current recognized string, an object describing possible glare, and the cropped image that was associated with this incomplete result. The result string can be displayed to the user as an indication of what portion of the display is not being recognized. Depending on the peripheral, the result string may contain linefeeds representing multiple lines being recognized.

func ocrController(_ ocrController: VLDOCRController, didProcessResult result: VLDOCRResult?) {
    if let result = result {
        NSLog("Partial result \(result.resultString)")
        // The cropped image associated with this incomplete result;
        // can be shown to the user alongside the partial string
        let resultImage = result.image
    }
}

The ocrController:didCompleteReading:image:forPeripheral:metadata: delegate method is invoked when OCR processing has completed with reasonably high confidence.

func ocrController(_ ocrController: VLDOCRController, didCompleteReading record: VLDRecord?, image: UIImage?, forPeripheral peripheral: VLDOCRPeripheral?, metadata: [AnyHashable : Any]?) {
    // Obtain fields from the record to display to the user
    if peripheral?.type == VLDPeripheralType.glucoseMeter {
        if let diabetesRecord = record as? VLDDiabetes {
            let bloodGlucose: NSNumber? = diabetesRecord.bloodGlucose
        }
    }

    // Obtain the captured image, display to the user for verification of the reading
    // (verificationImageView is assumed to be a view already in the hierarchy)
    verificationImageView.image = image
}

The value received from the OCR controller should be verified by the user and then submitted to the Validic server.

// After verification, queue the record and image to be uploaded to the server
VLDSession.sharedInstance().submitRecord(record, image: image)

The delegate is passed a VLDRecord subclass appropriate for the peripheral, the matching cropped preview image and additional metadata. The recognized values returned in the record should be visually validated by the user. The cropped preview image can be displayed to the user to validate the recognized values before uploading to the server.

When the user approves of the values, the record can be uploaded as described in Managing a Session.

OCR processing lifecycle

OCR processing commences when the VLDOCRController is instantiated. The camera preview session is stopped when the VLDOCRController is deallocated. OCR processing stops when the final result delegate method is invoked or when the controller is deallocated. To restart or to work with a different peripheral, construct a new VLDOCRController.
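For example, to rescan or switch to a different peripheral, release the existing controller and create a new one. A sketch, assuming controller is the optional property shown earlier:

```swift
// Restart OCR for a (possibly different) peripheral.
func restartOCR(with newPeripheral: VLDOCRPeripheral) {
    // Deallocate the previous controller; this stops its camera session
    self.controller = nil
    // Construct a new controller; OCR processing begins immediately
    self.controller = VLDOCRController(ocrPeripheral: newPeripheral)
    self.controller?.delegate = self
}
```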

Alternate: Overlay in a CALayer

Instead of placing the overlay image within a sibling view in a view hierarchy, the overlay image can be displayed in a CALayer added to the same view containing the preview layer. A separate view for the overlay is unnecessary.

var overlayLayer: CALayer?

override func viewDidLoad() {
    super.viewDidLoad()
    let layer = CALayer()
    layer.contents = peripheral.overlayImage()?.cgImage
    previewView.layer.addSublayer(layer)
    self.overlayLayer = layer
}

The overlayLayer’s frame needs to be set whenever the containing view’s frame changes:

override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    self.overlayLayer?.frame = self.controller?.overlayFrame ?? .zero
}