
American Sign Language Recognition Using CNN

What I did here

  1. The first thing I did was use OpenCV to collect samples for 26 gestures: 1200 grayscale photos of 50x50 pixels for each gesture, saved in the gestures/ subdirectory. I did not push the gestures folder to GitHub because of its size. The gestures can be found here. flip_images.py was then used to flip every image vertically (a minimal sketch of this step appears after the list below). As a result, each gesture contains 2400 pictures.
  2. I then learned what a CNN is and how it works.
  3. I built a CNN with Keras. If you wish to add more gestures, you will probably need to adjust the layers and tune some parameters yourself.
  4. Then I used the trained Keras model on a live video stream.
  5. As of now, I have stored 26 gestures, one for each letter from A to Z. These pictures were used to train the model.

I skipped over a lot of details, but the steps above are the most important and fundamental ones.
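
To make the flip step concrete, here is a minimal sketch of what extra2_flip_images.py does, assuming one numbered subfolder per gesture under gestures/ (the file layout and names here are assumptions, not the repo's exact code):

    # Flip every stored gesture image and save the mirrored copy alongside it.
    import os
    import cv2

    GESTURES_DIR = "gestures"  # assumption: one numbered subfolder per gesture

    for gesture in os.listdir(GESTURES_DIR):
        folder = os.path.join(GESTURES_DIR, gesture)
        for i in range(1, 1201):  # 1200 originals per gesture
            path = os.path.join(folder, f"{i}.jpg")
            img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            flipped = cv2.flip(img, 0)  # flipCode 0 = vertical flip, as described above
            cv2.imwrite(os.path.join(folder, f"{i + 1200}.jpg"), flipped)

This doubles the dataset to 2400 images per gesture without extra capture sessions.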

Requirements

  1. Python 3.7
  2. Tensorflow 2.7
  3. Keras 2.7
  4. OpenCV 4.5.4.60
  5. h5py
  6. pyttsx3
  7. PyQt5

You will also need a solid understanding of the libraries listed above, as well as of neural networks. If you're having trouble with them, look them up on the internet; I'm only a novice in those areas. A good CPU (preferably with a GPU) also helps.

Installing the requirements

  1. Start your terminal or cmd, depending on your OS.
  2. If you have an Nvidia GPU, make sure you have the prerequisites for the TensorFlow GPU installation (refer to the official site). Then use this command
pip install -r requirements_gpu.txt
  3. In case you do not have a GPU, use this command
pip install -r requirements_cpu.txt

How to use this repo

I have made a GUI for setting up the hand histogram, recognizing single gestures, combining those characters into words, and converting the text to speech.

Creating a gesture

  1. First set your hand histogram. You do not need to do it again if you have already done it, but you do need to redo it if the lighting conditions change. To do so, type the command given below and follow the instructions.
python 5_set_hand_hist.py
  • A window called "Set hand histogram" will appear.
  • "Set hand histogram" will have 50 squares (5x10).
  • Put your hand in those squares. Make sure your hand covers all the squares.
  • Press 'c'. Another window, "Thresh", will appear.
  • On pressing 'c', only white patches corresponding to the parts of the image that have your skin color should appear on the "Thresh" window.
  • Make sure all the squares are covered by your hand.
  • In case you are not successful then move your hand a little and press 'c' again. Repeat this until you get a good histogram.
  • After you get a good histogram press 's' to save the histogram. All the windows close.
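
For the curious, this is roughly what the histogram step does under the hood: it builds an HSV histogram from the sampled squares, and later back-projects that histogram onto each frame to find skin-colored pixels. A hedged sketch (the names here are illustrative, not the repo's actual functions):

    import cv2

    def skin_thresh(frame_bgr, hand_hist):
        # Back-project the saved hue/saturation histogram onto the frame,
        # so pixels matching the sampled skin color light up.
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        back_proj = cv2.calcBackProject([hsv], [0, 1], hand_hist, [0, 180, 0, 256], 1)
        disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (11, 11))
        cv2.filter2D(back_proj, -1, disc, back_proj)  # smooth the projection
        _, thresh = cv2.threshold(back_proj, 150, 255, cv2.THRESH_BINARY)
        return thresh  # the white patches shown in the "Thresh" window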
  2. I have already added 26 (A-Z) gestures. It is up to you whether you want to add more gestures or replace mine, so this step is OPTIONAL. To create your own gestures or replace mine, run the command given below. When the program starts, you will have to enter the gesture number and gesture name/text. An OpenCV window called "Capturing gestures" will then appear. In the webcam feed you will see a green box (inside which you will have to do your gesture) and a counter that counts the number of pictures stored.
python extra1_create_gestures.py   
  3. Press 'c' when you are ready with your gesture. Capturing begins after a few seconds. Move your hand around a little bit. You can pause capturing by pressing 'c' and resume it by pressing 'c' again; capturing resumes after a few seconds. After the counter reaches 1200, the window will close automatically.
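
A hedged sketch of the capture loop (box coordinates, paths, and the plain grayscale conversion are illustrative; the actual script thresholds the box with your hand histogram first):

    import cv2

    cam = cv2.VideoCapture(0)
    x, y, w, h = 300, 100, 300, 300  # illustrative green-box coordinates
    count = 0
    while count < 1200:
        ok, frame = cam.read()
        if not ok:
            break
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        cv2.imwrite(f"gestures/0/{count + 1}.jpg", cv2.resize(roi, (50, 50)))
        count += 1
    cam.release()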

After capturing all the gestures, you can flip the images using

python extra2_flip_images.py
  4. When you are done adding new gestures, run the 2_load_images.py file once. You do not need to run this file again unless you add a new gesture.
python 2_load_images.py
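
For reference, 2_load_images.py presumably pairs every image with its gesture label, shuffles, splits, and pickles the results; a sketch under that assumption (the split ratio and file format are guesses):

    import os
    import pickle
    import random
    import cv2

    pairs = []
    for gesture in os.listdir("gestures"):
        folder = os.path.join("gestures", gesture)
        for name in os.listdir(folder):
            img = cv2.imread(os.path.join(folder, name), cv2.IMREAD_GRAYSCALE)
            pairs.append((img, int(gesture)))

    random.shuffle(pairs)
    split = int(0.9 * len(pairs))  # illustrative 90/10 train/test split
    train, test = pairs[:split], pairs[split:]

    for fname, data in [("train_images", [p[0] for p in train]),
                        ("train_labels", [p[1] for p in train]),
                        ("test_images", [p[0] for p in test]),
                        ("test_labels", [p[1] for p in test])]:
        with open(fname, "wb") as f:
            pickle.dump(data, f)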

Displaying all gestures

  1. To see all the gestures that are stored in the gestures/ folder, run this command
python 1_display_all_gestures.py

Training a model

  1. Training is done with Keras.
python 3_cnn_keras.py
  2. After training, you will have the model in the root directory under the name cnn_model_keras.h5.

You do not need to retrain your model every time; you only need to retrain it if you add or remove a gesture.
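
For orientation, here is a minimal sketch of a Keras CNN for 50x50 grayscale inputs and 26 classes; the layer sizes are illustrative assumptions, not the exact architecture in 3_cnn_keras.py:

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(50, 50, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation="relu"),
        Dropout(0.4),
        Dense(26, activation="softmax"),  # one output per gesture, A-Z
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_images, train_labels, epochs=..., validation_data=...)
    # model.save("cnn_model_keras.h5")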

Get model reports

  1. To get the classification reports for the model, make sure you have the test_images and test_labels files, which are generated by 2_load_images.py. In case you do not have them, run 2_load_images.py again. Then run this file
python 4_get_model_reports.py
  2. You will get the confusion matrix, an epoch vs. loss graph, an epoch vs. accuracy graph, and the F-scores, precision, and recall for the model's predictions.
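
A hedged sketch of how such reports can be produced with scikit-learn, assuming test_images and test_labels are pickled arrays of images and integer labels (the exact pickle format is an assumption):

    import pickle
    import numpy as np
    from sklearn.metrics import classification_report, confusion_matrix
    from tensorflow.keras.models import load_model

    with open("test_images", "rb") as f:
        test_images = np.array(pickle.load(f), dtype=np.float32) / 255.0
    with open("test_labels", "rb") as f:
        test_labels = np.array(pickle.load(f))

    model = load_model("cnn_model_keras.h5")
    preds = np.argmax(model.predict(test_images.reshape(-1, 50, 50, 1)), axis=1)

    print(confusion_matrix(test_labels, preds))
    print(classification_report(test_labels, preds))  # precision, recall, F-scores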

Testing gestures

I ended up using the Keras model, as loading it into memory and using it for prediction is very easy.

  1. First set your hand histogram. You do not need to do it again if you have already done it, but you do need to redo it if the lighting conditions change. Run the command below and follow the same steps as in "Creating a gesture" above.
python 5_set_hand_hist.py
  2. For recognition, start the 6_recognize_gesture.py file.
python 6_recognize_gesture.py
  3. You will see a small green box inside which you need to do your gestures.
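
The core of the recognition loop is simple; a sketch of the prediction step (function and variable names are illustrative): take the thresholded green-box ROI, resize it to 50x50, and ask the model for the most likely letter.

    import cv2
    import numpy as np
    from tensorflow.keras.models import load_model

    model = load_model("cnn_model_keras.h5")
    labels = [chr(ord("A") + i) for i in range(26)]

    def predict_gesture(thresh_roi):
        img = cv2.resize(thresh_roi, (50, 50)).astype(np.float32) / 255.0
        probs = model.predict(img.reshape(1, 50, 50, 1), verbose=0)[0]
        return labels[int(np.argmax(probs))], float(np.max(probs))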

Using 7_text_speech_and_wording.py

  1. First set your hand histogram. You do not need to do it again if you have already done it, but you do need to redo it if the lighting conditions change. Run the command below and perform the same steps as when setting the hand histogram above.
python 5_set_hand_hist.py
  2. Start the file.

python 7_text_speech_and_wording.py
  3. In text mode you can create your own words using fingerspelling or use the predefined gestures.
  4. The text on screen will be converted to speech when you remove your hand from the green box.
  5. Make sure you hold the same gesture inside the green box for 15 frames, or the gesture will not be converted to text.
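
The speech step itself is a small pyttsx3 call; a minimal sketch (the accumulated-text handling is illustrative):

    import pyttsx3

    engine = pyttsx3.init()

    def speak(text):
        engine.say(text)
        engine.runAndWait()  # blocks until the sentence has been spoken

    speak("HELLO")  # e.g. a word built from fingerspelled gestures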
