eye-of-horus

A face and gesture recognition authentication system. The Eye of Horus is my final year University of Plymouth project and accounts for 40 credits of my grade this year. My supervisor for this project is Professor Nathan Clark.


Vision

For: Users looking to protect their assets within safes or rooms who need a more reliable and secure solution than an easily stolen key or a forgettable passphrase.

Whose: Problem is forgotten forms of authentication, or the need to grant only a single group of authorised people access to an asset.

The: Eye of Horus.

Is A: Facial and gesture recognition authentication system.

That: Combines two common biometric security features into one camera. It provides a secure, helpful authorisation system for a room, safe, house, you name it! Without the need to remember keys or passphrases, the camera uses facial recognition to identify you as a party authorised to access the asset, and gesture recognition to accept a premade unlock pattern. To lock again, simply exit the asset and perform your locking gesture while looking up at the camera, exactly as you did before. Forgot either gesture? No problem! Simply recover it with a good picture of yourself or another combination, reset your gesture there and then, and make a new one.

Components

Image of the project structure

This project consists of two major components. Those wishing to use the features of this project in their own projects or products will be more interested in the Python library, whereas those who just want to play around and see how this could work in a production environment would be better suited to the website frontend.

Website

This is a simple ReactJS frontend, NodeJS backend website used to showcase how this application could work in a production environment. The website server utilises the python library to handle most of the work, just parsing and returning the responses, but it could be a full-blown recognition server too if set up appropriately.

Tests are directory based, meaning there is one folder for running all backend and frontend tests against the website using Jest. To run the tests, execute npm run jstest from the repository root.

Python

This is a collection of python and shell scripts that handle, parse and execute queries and responses to and from the various AWS services used by the project. After each action, a response.json file is produced that details what happened during the action. Check out the README.md at src/scripts for more information.
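If you want to consume these response files from your own code, the snippet below is a minimal sketch of doing so. It assumes response.json is written to the directory you ran the script from and contains plain JSON; the actual keys depend on the action, so check the README.md at src/scripts for the real schema.

import json
from pathlib import Path

# Minimal sketch: inspect the response.json written by the last script action.
# The location (current working directory) and structure are assumptions, not
# documented behaviour of the library.
response_path = Path("response.json")

if response_path.exists():
    with response_path.open() as response_file:
        response = json.load(response_file)
    # Which keys are present depends on the action that was executed.
    print(json.dumps(response, indent=2))
else:
    print("No response.json found - run one of the scripts in src/scripts first.")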

Tests are directory based, meaning there is one folder for running all tests against the python scripts using pytest. To run the tests, execute npm run pytest from the repository root (they will take some time, as the custom labels project has to boot up and shut down after use).

Installation

Architecture

Everything in this project was built on macOS Catalina (and later Big Sur); the author cannot guarantee that this application will work on other operating systems.

However, all of the software used works on Windows and Linux, so it should be possible to adapt your approach accordingly. Please contact the author if you need assistance on the matter.

Your device should also be running the latest version of Google Chrome for the website (not needed for the python library) and must have a camera (any webcam should do).

Method

From the repository root in a terminal window...

  1. Install Node & Python
  2. Install dependencies
npm install
cd src/server && npm install
cd src/scripts && pip install -r requirements.txt
  1. Install G-Streamer (only if you wish to use the streaming feature of this project)

  2. Create Local Environment OR Import secrets

    • Ideally, you should create your own local environment. The author used a set of tutorials to setup the AWS services necessary for the completion of this project, tracked by the first work item in this repo. You are free to ask questions about the process as you see fit but the author cannot guarantee success or support on the matter. Once you are done, follow the instructions below about populating the .env file with your secrets.
    • Otherwise, to communicate with the author's AWS service, create a .env file at the repo root that is a copy of the provided example file. Populate the template values with the ones the author has given you. Please request access to the values by opening a new issue here along with a reason of why you need them and a secure contact method. Secrets provided in this manner can be revoked at anytime and the author reserves the right to do so without notice. These are personal keys and should not be shared at all. Finally, ensure boto3 can communicate with AWS by following this guide (the authors region is eu-west-1).
  3. Start the custom labels project (this can take up to 20mins)

python src/scripts/gesture/gesture_recog.py -a start
  1. You're good to go! The python scripts are now ready to use. To use the website:

    • Start the backend nodejs server. From src/server, execute npm run start
    • Open a separate terminal window. Execute npm run start and wait for the message Compiled successfully!. Finally, navigate to http://localhost:3000/ in Chrome to view the frontend of the website.
    • Inside the website, you will be presented with the home page where you will need to enter a username for a new or existing account (the system will automatically detect which is which). Username's must be alphanumeric.
  2. After you have finished development or investigation, please ensure to run python src/scripts/gesture/gesture_recog.py -a stop to stop the custom labels project. Failing to do so will result in additional financial charges as you are billed while it is in "active" state. For more information, see Inference Hours.
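Before starting the custom labels project (step 5), it can help to confirm that boto3 is picking up your credentials and region. The check below is a sketch rather than part of the project's scripts; it only assumes a working AWS account in eu-west-1 and the standard STS and Rekognition APIs.

import boto3

# Sketch of a connectivity check (not part of the project's scripts).
# Assumes your credentials are configured as per the boto3 guide and that
# the author's region, eu-west-1, is the one you want to talk to.
session = boto3.Session(region_name="eu-west-1")

# Any valid credentials can call STS; this confirms boto3 found them.
identity = session.client("sts").get_caller_identity()
print(f"Authenticated as {identity['Arn']}")

# Rekognition Custom Labels projects live in this region; listing them is a
# cheap way to confirm the service is reachable (the list may be empty).
rekognition = session.client("rekognition")
projects = rekognition.describe_projects().get("ProjectDescriptions", [])
print(f"Found {len(projects)} Rekognition Custom Labels project(s) in eu-west-1")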

Management

Due to university regulations on final year projects, my non-code backlog is recorded on Microsoft Planner. Please use the link below to request access to the planner: https://outlook.office365.com/owa/eyeofhorusPlanner@group.plymouth.ac.uk/groupsubscription.ashx?action=join&source=MSExchange/LokiServer&guid=a322b80f-c38a-44dd-b956-e9b43f82ec87

Code related tasks are recorded on the GitHub Project Board to allow for greater automation and drawing of references between tasks. See active and past sprints (that are tied to the board and individual tasks) by viewing the project milestones.

Concessions

These concessions are known bugs or problems with the website, the python library, the AWS services, or some combination of the above.

  • Facial Recognition: The python library employs counter presentation attack tactics to ensure a picture of a user's face cannot be used to authenticate without their permission. Inevitably, this may result in false positives, where the system thinks your real face is a fake image being shown to it. When authenticating with the system, position your face as closely as humanly possible to the position in your stored face photo. If that doesn't work, ensure that you are using the same device to authenticate as you did to take the stored photo, and adjust your background to be less cluttered.
  • Gesture Recognition: Gesture recognition is plagued by inaccuracies due to how similar each gesture type is to the others. If your gestures are being identified incorrectly (or not at all), try following the steps at the end of the training notes document.