
A computer vision project that uses a CNN to translate American Sign Language (ASL) to text and speech.


Sign-Language-Platform

ASL translator using a CNN. Currently, it has been trained on 10 letters only: ['A', 'C', 'E', 'H', 'I', 'L', 'O', 'U', 'V', 'W'].

What does it do?

It translates American Sign Language from a live webcam feed to text and then to speech.

Creators:

  1. Prajwol Lamichhane
  2. Pratik Rajbhandari
  3. Abhay Raut
  4. Bishal Sarangkoti


Requirements:

Python 3.6, 64-bit (Python 3.7 is not officially supported by TensorFlow)

For Anaconda users: you can download the virtual environment file "tensorflow_env.yml" and import it into an Anaconda environment; it installs all the libraries needed for the project, as shown below.
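For example, assuming the file sits in the repository root (the environment name is defined inside the .yml file itself; "tensorflow_env" is an assumption here):

conda env create -f tensorflow_env.yml
conda activate tensorflow_env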

Other users can install all requirements from the "requirements.txt" file:

pip install -r requirements.txt

Configuring paths to run the translator:

  1. Download pre-trained model from here
  2. Modify MODEL_PATH in variables.py (see the sketch after this list)
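
variables.py holds the path constants the scripts read. A minimal sketch, assuming MODEL_PATH is a plain string constant (the example path is a placeholder, not the real location on your machine):

    # variables.py (sketch; only the MODEL_PATH name is confirmed by this README)
    MODEL_PATH = "path/to/withbgmodelv1.h5"  # point this at the downloaded model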

Running translator.py

After installing all the requirements in your system environment or virtual environment, downloading the model, and setting MODEL_PATH, you can run the translator directly.

Usage:

  1. Translate from webcam
python translator.py 

Controls:
Press n to append the current letter
Press m for space
Press d to delete the last letter from the sentence
Press s to speak the translated sentence
Press c to clear the sentence
Press the ESC key to exit
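
For reference, a minimal sketch of how such a key-handling loop can be written with OpenCV. This is not the project's exact code: predict_letter() and speak() are hypothetical stand-ins for the CNN prediction and text-to-speech steps.

    # Sketch of a webcam key-handling loop; not the project's exact code.
    import cv2

    def predict_letter(frame):
        # Placeholder: the real project runs the CNN on the frame here.
        return "A"

    def speak(text):
        # Placeholder: the real project converts the sentence to speech here.
        print(text)

    sentence = ""
    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("ASL Translator", frame)
        key = cv2.waitKey(1) & 0xFF
        if key == ord('n'):      # append the current letter
            sentence += predict_letter(frame)
        elif key == ord('m'):    # space
            sentence += " "
        elif key == ord('d'):    # delete the last letter
            sentence = sentence[:-1]
        elif key == ord('s'):    # speak the translated sentence
            speak(sentence)
        elif key == ord('c'):    # clear the sentence
            sentence = ""
        elif key == 27:          # ESC to exit
            break
    cap.release()
    cv2.destroyAllWindows()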

Configuring paths to run ASL.ipynb

  1. Download datasets from here or create your own
  2. Modify TRAIN_DATA_PATH and TEST_DATA_PATH
  3. Train the model
  4. Your model is saved as withbgmodelv1.h5
  5. Use the model to run translator.py by configuring the path in variables.py
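
The notebook's training flow follows the usual Keras pattern. A minimal sketch, assuming the datasets are laid out one folder per letter and the images are resized to 64x64 grayscale; the architecture shown is illustrative, not the project's exact network:

    # Illustrative training sketch; the real preprocessing and CNN live in ASL.ipynb.
    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    TRAIN_DATA_PATH = "data/train"  # placeholder paths; set your own
    TEST_DATA_PATH = "data/test"

    datagen = ImageDataGenerator(rescale=1.0 / 255)
    train = datagen.flow_from_directory(TRAIN_DATA_PATH, target_size=(64, 64),
                                        color_mode="grayscale", class_mode="categorical")
    test = datagen.flow_from_directory(TEST_DATA_PATH, target_size=(64, 64),
                                       color_mode="grayscale", class_mode="categorical")

    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(10, activation="softmax"),  # one output per trained letter
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train, validation_data=test, epochs=10)
    model.save("withbgmodelv1.h5")  # the file MODEL_PATH should point to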