
IRIS

Winning project at the Sony X University of Manchester Hackathon, June 2022

[Image: IRIS logo, with computer-vision tracking visuals overlaid on a person]


About

IRIS was built with the visually impaired in mind.

"OK, IRIS - describe my surroundings to me"

On hearing a prompt like this, IRIS uses its onboard camera to capture an image of your surroundings, then narrates a description of what is going on around you through an earpiece.


What we did

As a team of four, with only seven hours to go from inception to implementation, we split the work effectively.

Ioan assembled the hardware (a Sony Spresense board, Wi-Fi module, camera, and microphone) and produced the pitch to showcase the project. Tom Hewitt wrote the code running on the board, including the communications link to IRIS's server. Lourenço trained a model to listen for the wake word, "OK, IRIS", using Edge Impulse. Tom Cassar built a server that called OpenAI's GPT-3 and the Google Cloud APIs to turn the raw data from the board into a narrated audio description.
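The server-side flow can be sketched roughly as below. This is an illustrative assumption, not the actual hackathon code: the function names, prompt wording, and stubbed service calls are all hypothetical (the real server authenticated against OpenAI and Google Cloud, which needs API keys).

```python
# Hypothetical sketch of IRIS's server pipeline: vision labels from the
# board's image -> GPT-3 prompt -> text description -> narrated audio.
from typing import Callable, List


def build_prompt(labels: List[str]) -> str:
    """Turn raw scene labels from the captured image into a GPT-3 prompt."""
    scene = ", ".join(labels)
    return (
        "Describe the following scene to a visually impaired listener "
        f"in one short, friendly sentence: {scene}."
    )


def describe_surroundings(
    labels: List[str],
    complete: Callable[[str], str],      # stand-in for a GPT-3 completion call
    synthesise: Callable[[str], bytes],  # stand-in for a text-to-speech call
) -> bytes:
    """Full pipeline: labels -> prompt -> description -> audio bytes."""
    description = complete(build_prompt(labels))
    return synthesise(description)


if __name__ == "__main__":
    # Stub the external services so the sketch runs without credentials.
    fake_complete = lambda prompt: "There is a person sitting at a desk with a laptop."
    fake_tts = lambda text: text.encode("utf-8")  # real code would return audio
    audio = describe_surroundings(["person", "desk", "laptop"], fake_complete, fake_tts)
    print(audio.decode("utf-8"))
```

Passing the two external calls in as parameters keeps the pipeline testable offline; in production they would wrap the real OpenAI and Google Cloud clients.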
