vibe

Vibe is an all-purpose eye-tracking web app and API for Alzheimer's disease research.

It was built for the 2021 CNT Hackathon at the UW Center for Neurotechnology, where it received the 1st place prize.

Getting started

The following will run everything on localhost so you can try the demo:

git clone git@github.com:nostalgia-cnt/vibe.git
cd vibe
pip3 install -r requirements.txt
python3 app.py

Now go to https://0.0.0.0:5000 in your browser and you can proceed with the demo.

Note that this works best on Linux; there may be some multi-threading errors on macOS.
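
For reference, here is a minimal sketch of what the Flask entry point might look like; the route, template name, and ad-hoc TLS certificate (which needs the cryptography package installed) are illustrative assumptions, not the repository's actual app.py:

# minimal, illustrative Flask entry point
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def index():
    # serve the demo landing page
    return render_template('index.html')

if __name__ == '__main__':
    # bind to 0.0.0.0:5000; the ad-hoc TLS cert provides the HTTPS most
    # browsers require before granting webcam access
    app.run(host='0.0.0.0', port=5000, ssl_context='adhoc')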

Demo

You can watch a short demo by clicking here or the gif below:

Vibe demo

Problem

According to the World Health Organization (WHO), a new case of dementia is diagnosed every 3 seconds. That’s 28,800 per day or over 10 million people per year.

  • Late diagnosis / poor outcomes - AD pathology can be present up to 20 years before symptoms manifest.
  • Early treatment helps delay progression, improving outcomes and lowering costs.
  • There is a strong need to detect AD earlier, enabling earlier therapies that slow the decline of AD symptoms, improve treatment outcomes, and give patients a greater quality of life.

Solution: eye tracking biomarkers

Eye tracking has emerged as a low-cost, noninvasive tool to diagnose and track Alzheimer's disease symptoms.

Eye movements and the pupillary reflex have been used for several decades in neurological disease research. Careful examination of both allows researchers to probe the medial temporal lobe memory system, the cholinergic neuronal pathways, the progressive neuropathological changes within the neocortex, and brain dopamine activity.

Below are several studies that indicate the effectiveness of using visual biomarkers for characterizing Alzheimer's disease, MCI, and controls:

Tasks

Upon reviewing the literature (see above), we built a custom protocol with four tasks that map onto the oculomotor symptoms and memory deficits present in MCI and Alzheimer's disease patients.

1. baseline process

We created a simple baseline process in which users look right, down, left, up, and center, to help build a regression model on your own eyes using the WebGazer.js API (a rough illustration of this calibration step is sketched below).
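
To make the idea concrete, here is a minimal sketch of the kind of per-user correction such a calibration enables; the sample values, target layout, and function names are illustrative assumptions, not the repository's actual code:

# sketch: fit a per-user linear correction from five calibration fixations
import numpy as np

# raw gaze estimates recorded while the user fixated each known target
raw = np.array([[0.91, 0.52], [0.49, 0.95], [0.08, 0.51],
                [0.50, 0.06], [0.50, 0.50]])   # right, down, left, up, center
target = np.array([[1.0, 0.5], [0.5, 1.0], [0.0, 0.5],
                   [0.5, 0.0], [0.5, 0.5]])    # normalized screen coordinates

# add a bias column and solve least squares so that raw_aug @ W ~ target
raw_aug = np.hstack([raw, np.ones((len(raw), 1))])
W, *_ = np.linalg.lstsq(raw_aug, target, rcond=None)

def correct(point):
    # apply the learned correction to a raw (x, y) gaze estimate
    return np.append(point, 1.0) @ W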

2. picture task

Two black-and-white pictures were selected to prevent color from affecting the stimulus, and the participant is asked to focus on the rooster.

3. sentence reading task

The Grandfather Passage is a standard passage for speech-related research: it covers all the major phonemes in the English language and has been validated in this population. We therefore used this passage with a countdown timer to get a window into how cognitive impairment may affect performance on this task.

4. video task

We selected a complex black-and-white video, again keeping the stimulus colorless as a design consideration for AD and related disorders.

Reports

After you complete all the tasks, you get a report like the one below showing your X and Y gaze positions for each task and how they compare to the average of the general population.
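
As a rough sketch of how such a comparison plot could be produced (the participant trace and the population baseline below are made-up stand-ins for illustration):

# sketch: overlay a participant's gaze trace on a population-average trace
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 30, 300)                        # 30 s task, illustrative
avg_x = 0.5 + 0.2 * np.sin(t)                      # stand-in population average
user_x = avg_x + 0.02 * np.random.randn(len(t))    # stand-in participant trace

plt.plot(t, avg_x, label='population average')
plt.plot(t, user_x, label='participant', alpha=0.7)
plt.xlabel('time (s)')
plt.ylabel('normalized gaze X')
plt.title('Gaze X position over one task')
plt.legend()
plt.show()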

Potential confounders

Note that using eye-tracking features for diagnosing AD can have multiple confounders, including:

  • Oculomotor abnormalities - Oculomotor abnormalities also occur in other diseases, so algorithms may produce false positives if those diseases are not screened out at the start of screening (e.g. Multiple System Atrophy involves slower prosaccades and increased antisaccade errors).
  • Aging affects saccades - Eyesight worsens with age (e.g. declining visual acuity), as does reaction time, so these factors may bias or confound any eye-based diagnostic marker.
  • Less controlled setups - Many aspects of the recording setup can affect eye recordings, including brightness, corrective lenses, face orientation / angle, and ambient visual noise.

We hope to address these confounds with additional experiments in the future.

Contributors / Acknowledgements

First, we'd like to acknowledge the UW Center for Neurotechnology for putting on an awesome virtual hackathon during COVID-19! Without your support - including coordinating user interviews and giving feedback on our report structure - none of this work would have been possible.

Second, we would like to sincerely thank the following contributors to this repository:

  • Jim Schwoebel - created the core survey structure / back-end in Flask/Python.
  • Keaton Armentrout - helped to get WebGazer.js up and running / recording (through a PR).
  • Pamel Kang - helped to coordinate what the report structures should look like / mock-ups.
  • Jayant Arora - helped with the selection of images and other protocol design elements.

Pull requests are welcome if you'd like to expand our work!

License

The code in this repository is licensed under the GPLv3 License. Because WebGazer.js is distributed under this license and is incorporated here, we must license our code the same way (we'd prefer the Apache 2.0 license to make the work more widely distributable, but we can't).

Some notes:

  • WebGazer.js has custom licensing, so check that out if you want to use this commercially.
  • Images used in the protocol are licensed under Creative Commons licenses and were found via Google search.
  • The logo was built using Inkscape and is licensed under the Apache 2.0 license.

References

Some other repositories that you may like to look into include:

Research papers that are useful in the space include:

The most commonly used screening tools for AD diagnosis:

Task assets (if you want to replicate our work):