Vision-App

iOS app that identifies objects using Core ML, written in Swift 4.

Preview

Built with

  • iOS 11
  • Xcode 9

Requirements

SqueezeNet.mlmodel — detects the dominant object in an image from a set of 1000 categories such as trees, animals, food, vehicles, and people.

import AVFoundation // work with audiovisual assets, control device cameras, process audio, and configure system audio interactions
import CoreML       // handles the machine learning model
import Vision       // handles face and object recognition
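
Together, these frameworks let the app wrap the SqueezeNet model in a Vision request. Below is a minimal sketch (not the project's exact code) of classifying raw image data; it assumes SqueezeNet.mlmodel has been added to the Xcode project, so that Xcode generates the SqueezeNet class automatically:

    import CoreML
    import Vision

    func classify(imageData: Data) {
        // Wrap the Xcode-generated SqueezeNet model for use with Vision.
        guard let model = try? VNCoreMLModel(for: SqueezeNet().model) else { return }
        let request = VNCoreMLRequest(model: model) { request, _ in
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            // `identifier` is the category label; `confidence` is 0.0–1.0.
            print("\(top.identifier): \(top.confidence)")
        }
        // Run the request against the raw image bytes.
        try? VNImageRequestHandler(data: imageData, options: [:]).perform([request])
    }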

Features:

  • Access the camera (see the combined sketch after this list)
    var captureSession: AVCaptureSession!
    ...
    captureSession = AVCaptureSession()
  • Capture a photo and show it in a small preview
    var cameraOutput: AVCapturePhotoOutput!
    ...
    cameraOutput.capturePhoto(with: settings, delegate: self)
  • Control the camera flash
    let flashControl = AVCapturePhotoSettings()
    flashControl.flashMode = .on
  • Try to identify the object in the captured photo
    func resultsMethod(request: VNRequest, error: Error?)
    results = request.results as? [VNClassificationObservation] // each VNClassificationObservation is a possible category label for the image
  • Show the confidence of the identification
    classification.confidence // a value between 0.0 and 1.0
  • Read the result out loud using AVSpeechSynthesizer
    var speechSynthesizer = AVSpeechSynthesizer()
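
The sketch below shows how these features could fit together in one view controller. It is an illustration under assumptions, not the repository's source: CameraViewController, takePhoto, and previewImageView are hypothetical names, and the app would also need an NSCameraUsageDescription entry in Info.plist to access the camera.

    import UIKit
    import AVFoundation
    import CoreML
    import Vision

    class CameraViewController: UIViewController, AVCapturePhotoCaptureDelegate {
        var captureSession: AVCaptureSession!
        var cameraOutput: AVCapturePhotoOutput!
        var speechSynthesizer = AVSpeechSynthesizer()
        @IBOutlet weak var previewImageView: UIImageView! // hypothetical outlet for the small photo preview

        override func viewDidLoad() {
            super.viewDidLoad()
            // Access the camera: wire the default video device into a capture session.
            captureSession = AVCaptureSession()
            captureSession.sessionPreset = .hd1920x1080
            guard let camera = AVCaptureDevice.default(for: .video),
                  let input = try? AVCaptureDeviceInput(device: camera) else { return }
            cameraOutput = AVCapturePhotoOutput()
            if captureSession.canAddInput(input) { captureSession.addInput(input) }
            if captureSession.canAddOutput(cameraOutput) { captureSession.addOutput(cameraOutput) }
            captureSession.startRunning()
        }

        func takePhoto() {
            let settings = AVCapturePhotoSettings()
            settings.flashMode = .on // only set .on if the device actually has a flash
            cameraOutput.capturePhoto(with: settings, delegate: self)
        }

        // Delegate callback: show the photo in the preview, then classify it.
        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {
            guard error == nil, let data = photo.fileDataRepresentation() else { return }
            previewImageView.image = UIImage(data: data)
            do {
                let model = try VNCoreMLModel(for: SqueezeNet().model)
                let request = VNCoreMLRequest(model: model, completionHandler: resultsMethod)
                try VNImageRequestHandler(data: data, options: [:]).perform([request])
            } catch { print(error) }
        }

        // Read the top classification and its confidence out loud.
        func resultsMethod(request: VNRequest, error: Error?) {
            guard let results = request.results as? [VNClassificationObservation],
                  let top = results.first else { return }
            let sentence = "This looks like a \(top.identifier), and I'm \(Int(top.confidence * 100)) percent sure."
            speechSynthesizer.speak(AVSpeechUtterance(string: sentence))
        }
    }

Calling takePhoto() fires the flash (on devices that have one), shows the shot in the small preview, classifies it with SqueezeNet, and speaks the top result together with its confidence.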

Credits

Devslopes Core ML tutorials

License

Standard MIT License
