
The Mission

Vocable will allow people with conditions such as MS, stroke, ALS, spinal cord injury, and many others to communicate using an app that tracks head movements. We want to empower and bring quality-of-life equity to those who can't afford competing products, so that they can communicate with caretakers and loved ones.

Note: The app was originally called Eyespeak. The internal team working on the product changed the name to Vocable in February 2020 to help with the App Store submission process and to prevent any issues of copyright infringement.

Eyespeak v1 Technical Approach

Eyespeak will be an iOS app that uses ARKit 2 and the iPhone X's TrueDepth camera to translate eye movement into the movement of a "cursor" on-screen. A UI will act as the input and will most likely appear as a modified keyboard. Users will be able to enter their intent and activate text-to-speech with another UI element. This is the primary functionality, although there will be other features to improve the experience. The iPhone X's screen size is a limitation on the quality of the experience, so a tablet-sized device with TrueDepth capabilities, or an eye-tracking solution that doesn't require the lookAtPoint or left/right eye transform properties, would be ideal as the project progresses.

Currently, we are not having much success with eye tracking, so we are mapping the cursor to head movement instead, which works very well. The issue with eye tracking is that the subtleties of the eyes aren't reflected accurately or smoothly on-screen; up-and-down movement is better than left-to-right, but it is still not ideal. One thing to explore is creating our own cursor point using the leftEyeTransform and rightEyeTransform properties rather than using the lookAtPoint average. Others online exploring this technology have echoed the opinion that lookAtPoint is poor at accurately tracking the eyes. Tracking head movement is obviously a poor long-term solution, but it will be: 1) an opportunity to show progress and create interest in the project; and 2) a source of data to help us learn how to improve our method of eye tracking. A rough sketch of both approaches follows below.

One idea further down the road is to apply machine learning, via Core ML or a library like TensorFlow, to better calibrate the confluence of facial position, face type, and eye position.
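As a rough illustration of the two approaches above (the current head-tracking cursor and the experimental eye-transform idea), here is a minimal Swift sketch built on ARKit 2's face-tracking API. The class name, callback, and the ±0.35 rad range constant are illustrative assumptions, not the actual app code.

```swift
import ARKit
import UIKit
import simd

// Minimal sketch of the tracking ideas above. The class, callback, and range
// constant are illustrative assumptions, not the Vocable/Eyespeak source.
final class CursorTracker: NSObject, ARSessionDelegate {

    let session = ARSession()

    /// Receives a cursor position normalized to the unit square (0...1 on both axes).
    var onCursorUpdate: ((CGPoint) -> Void)?

    /// false = current head-tracking approach; true = experimental eye-transform approach.
    var useEyeTransforms = false

    func start() {
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first,
              let camera = session.currentFrame?.camera else { return }

        // Express the face anchor relative to the camera so the cursor responds to
        // head movement relative to the device rather than to the room.
        let faceInCamera = simd_mul(simd_inverse(camera.transform), face.transform)

        let forward: simd_float3
        if useEyeTransforms {
            // Experimental: average the +z axes of the eye transforms (given in
            // face-anchor space) instead of relying on ARKit's lookAtPoint estimate.
            let left = simd_make_float3(face.leftEyeTransform.columns.2)
            let right = simd_make_float3(face.rightEyeTransform.columns.2)
            forward = simd_normalize((left + right) / 2)
        } else {
            // Current approach: head tracking via the face's +z axis (points out of the face).
            forward = simd_normalize(simd_make_float3(faceInCamera.columns.2))
        }

        // Convert the direction into yaw/pitch angles.
        let yaw = atan2f(forward.x, forward.z)
        let pitch = asinf(max(-1, min(1, forward.y)))

        // Map an assumed comfortable range of motion (about ±0.35 rad) onto the unit
        // square; axis directions may need flipping to feel natural with the front camera.
        let range: Float = 0.35
        let x = CGFloat((yaw / range + 1) / 2)
        let y = CGFloat(1 - (pitch / range + 1) / 2)
        onCursorUpdate?(CGPoint(x: min(max(x, 0), 1), y: min(max(y, 0), 1)))
    }
}
```

In practice the raw angles would also need smoothing (for example a simple low-pass filter) and per-user calibration before the cursor feels usable.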
