An iOS app that identifies objects using Core ML, written in Swift 4
Built with
- iOS 11
- Xcode 9
SqueezeNet.mlmodel detects the dominant object present in an image from a set of 1,000 categories such as trees, animals, food, vehicles, and people.
import AVFoundation // Work with audiovisual assets, control device cameras, process audio, and configure system audio interactions
import CoreML // Runs the trained machine-learning model
import Vision // Handles image-analysis requests such as object and face recognition
- Accesses the camera
var captureSession: AVCaptureSession! ... captureSession = AVCaptureSession()
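A minimal sketch of that session setup, assuming a view controller whose view hosts the camera preview (the class and method names here are illustrative, not from the repo):

```swift
import UIKit
import AVFoundation

class CameraVC: UIViewController {
    var captureSession: AVCaptureSession!
    var previewLayer: AVCaptureVideoPreviewLayer!

    func configureSession() {
        captureSession = AVCaptureSession()
        captureSession.sessionPreset = .hd1920x1080

        // The default video device is the back camera; it can be nil (e.g. in the simulator).
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              captureSession.canAddInput(input) else { return }
        captureSession.addInput(input)

        // Show the live camera feed behind the UI.
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.videoGravity = .resizeAspectFill
        view.layer.addSublayer(previewLayer)

        captureSession.startRunning()
    }
}
```

Note that the app also needs an `NSCameraUsageDescription` entry in Info.plist, or iOS will terminate it on first camera access.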
- Captures a photo and saves it in a small preview
var cameraOutput: AVCapturePhotoOutput! ... cameraOutput.capturePhoto(with:delegate:)
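A sketch of the capture flow, assuming the `CameraVC` controller and a `previewImageView` outlet for the thumbnail (both names are assumptions, not taken from the repo):

```swift
import AVFoundation

extension CameraVC: AVCapturePhotoCaptureDelegate {
    func takePhoto() {
        // Each capture needs a fresh settings object.
        let settings = AVCapturePhotoSettings()
        cameraOutput.capturePhoto(with: settings, delegate: self)
    }

    // iOS 11 delivers the result as an AVCapturePhoto.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        guard error == nil, let data = photo.fileDataRepresentation() else { return }
        // Show the captured image in the small preview.
        previewImageView.image = UIImage(data: data)
    }
}
```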
- Controls the camera flash
let flashControl = AVCapturePhotoSettings() ... flashControl.flashMode = .on
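Since iOS 10, flash is configured per capture via `AVCapturePhotoSettings` rather than on the device itself. A hedged sketch, reusing the assumed `cameraOutput` property from above:

```swift
import AVFoundation

// Build capture settings with flash enabled, if the current camera supports it.
let flashControl = AVCapturePhotoSettings()
if cameraOutput.supportedFlashModes.contains(.on) {
    flashControl.flashMode = .on
}
cameraOutput.capturePhoto(with: flashControl, delegate: self)
```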
- Tries to identify the object in the captured picture
(request: VNRequest) results = request.results as? [VNClassificationObservation] // each VNClassificationObservation describes one thing Vision saw in the image
- Shows the confidence of the object identification
classification.confidence
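The two pieces above fit together as a single Vision request. A sketch, assuming the Xcode-generated `SqueezeNet` model class and an illustrative 50% confidence cutoff:

```swift
import CoreML
import Vision

func classify(imageData: Data) {
    // Wrap the Core ML model for use with Vision.
    guard let model = try? VNCoreMLModel(for: SqueezeNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Results are sorted by confidence (0.0–1.0); report only fairly certain matches.
        for classification in results.prefix(2) where classification.confidence >= 0.5 {
            print("\(classification.identifier): \(Int(classification.confidence * 100))%")
        }
    }

    let handler = VNImageRequestHandler(data: imageData)
    try? handler.perform([request])
}
```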
- The app reads the results out loud using AVSpeechSynthesizer
var speechSynthesizer = AVSpeechSynthesizer()
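A minimal sketch of the speech step; the helper name and spoken string are illustrative:

```swift
import AVFoundation

var speechSynthesizer = AVSpeechSynthesizer()

// Speak a classification result aloud.
func speak(_ text: String) {
    let utterance = AVSpeechUtterance(string: text)
    speechSynthesizer.speak(utterance)
}

// e.g. speak("This looks like a golden retriever.")
```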
Based on the Devslopes Core ML tutorials
Licensed under the MIT License