Winning project at the Sony X University of Manchester Hackathon, June 2022
IRIS was built with the visually impaired in mind.
"OK, IRIS - describe my surroundings to me"
On hearing a prompt such as this, IRIS uses its onboard camera to take an image of your surroundings. It then plays a narrated audio description of what is going on around you through an earpiece.
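Conceptually, the on-device flow is a simple loop: listen for the wake word, capture a photo, ask the server for a description, and play the result. A minimal sketch of that loop is below, written in Python for readability (the Spresense itself runs C++ firmware), with the hardware and network calls passed in as placeholder functions rather than real drivers:

```python
import time
from typing import Callable

def device_loop(hear_wake_word: Callable[[], bool],
                capture_image: Callable[[], bytes],
                request_description: Callable[[bytes], bytes],
                play_audio: Callable[[bytes], None],
                iterations: int = 1) -> None:
    """On-device flow: wait for "OK, IRIS", snap a photo, ask the
    server for a narrated description, and play it through the earpiece.

    The four callables are stand-ins for the microphone/wake-word model,
    the camera, the WiFi request to IRIS's server, and audio playback.
    """
    for _ in range(iterations):
        if hear_wake_word():
            image = capture_image()
            audio = request_description(image)
            play_audio(audio)
        time.sleep(0.1)  # poll the wake-word detector at ~10 Hz
```

Separating the loop from the I/O like this also makes each stage swappable, which suited a hackathon where the hardware and server were built in parallel.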
As a team of four, with only seven hours to go from inception to implementation, we split the work effectively.
Ioan assembled the hardware (a Spresense board, WiFi module, camera and microphone) and produced the pitch to showcase the project. Tom Hewitt wrote the code running on the board, including setting up communications with IRIS's server. Lourenço trained a model to listen for the wake word, "OK, IRIS", using Edge Impulse. Tom Cassar built a server which made calls to OpenAI's GPT-3 and the Google Cloud API to convert the raw data from the board into a narrated audio description.
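The server side of that split can be pictured as a three-stage pipeline: extract what is in the image, turn that into natural language, then synthesise speech. The sketch below shows one plausible shape for it; the function names, the label-to-prompt wording, and the use of injected callables are illustrative assumptions, not the actual hackathon code, with the three callables standing in for the Google Cloud vision, GPT-3, and text-to-speech calls:

```python
from typing import Callable, List

def build_prompt(labels: List[str]) -> str:
    """Turn raw vision labels into a natural-language prompt for GPT-3."""
    return ("Describe the following scene to a visually impaired listener "
            "in one or two friendly sentences: " + ", ".join(labels))

def describe_image(image_bytes: bytes,
                   detect_labels: Callable[[bytes], List[str]],
                   complete: Callable[[str], str],
                   synthesise: Callable[[str], bytes]) -> bytes:
    """Full server pipeline: image -> labels -> narration text -> audio."""
    labels = detect_labels(image_bytes)
    narration = complete(build_prompt(labels))
    return synthesise(narration)

# Stubbed example run (no API keys or network needed):
audio = describe_image(
    b"<jpeg bytes>",
    detect_labels=lambda img: ["person", "bicycle", "park"],
    complete=lambda prompt: "A person with a bicycle is standing in a park.",
    synthesise=lambda text: text.encode("utf-8"),
)
```

Keeping the external services behind plain callables means the glue logic can be tested with stubs, with the real API clients wired in only at deployment.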