
SiNext: The next generation sign language

About Project

The project is based on hand-gesture recognition and is primarily developed for people with speech impairments who are unable to speak. The aim is to create a model that can speak to smart assistants on the user's behalf, so that these users can also communicate with them and take advantage of modern technology such as home automation. The model uses deeplearn-knn-image-classifier to classify the images and knn.predictClass to predict the signs. Model accuracy during testing was 90.25%.
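The k-nearest-neighbour classification behind knn.predictClass can be sketched with a tiny, dependency-free KNN in plain JavaScript. This is an illustrative stand-in, not the actual deeplearn-knn-image-classifier implementation: the class name TinyKnn and the method names addExample/predictClass are assumptions chosen to mirror the library's usage pattern, and the 3-element "images" are toy feature vectors.

```javascript
// Minimal k-nearest-neighbour classifier sketch (no external dependencies).
// Illustrates the idea behind knn.predictClass: store labelled feature
// vectors, then vote among the k closest stored examples.
class TinyKnn {
  constructor(k) {
    this.k = k;
    this.examples = []; // { features: number[], label: number }
  }

  // Store one labelled feature vector (e.g. a flattened gesture image).
  addExample(features, label) {
    this.examples.push({ features, label });
  }

  // Return the label that wins the vote among the k nearest neighbours.
  predictClass(features) {
    const dist = (a, b) =>
      Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
    const nearest = this.examples
      .map(e => ({ label: e.label, d: dist(features, e.features) }))
      .sort((a, b) => a.d - b.d)
      .slice(0, this.k);
    const votes = {};
    for (const n of nearest) votes[n.label] = (votes[n.label] || 0) + 1;
    return Number(
      Object.keys(votes).reduce((a, b) => (votes[a] >= votes[b] ? a : b))
    );
  }
}

// Usage: two gesture classes, each trained from toy 3-pixel "images".
const knn = new TinyKnn(3);
knn.addExample([0, 0, 0], 0);       // gesture 0 samples (dark frames)
knn.addExample([0.1, 0, 0.1], 0);
knn.addExample([1, 1, 1], 1);       // gesture 1 samples (bright frames)
knn.addExample([0.9, 1, 0.9], 1);
console.log(knn.predictClass([0.05, 0.1, 0])); // → 0
```

In the real application the feature vectors come from camera frames and the labels are the user-defined gesture keywords; the voting logic is the same.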

In the application, the user first saves the desired keywords for the gestures he or she wants to use to tell or ask things of the smart assistant. The model then trains on the gestures the user has saved. To detect a gesture, the application simply captures a picture of the user performing it with a camera, converts the picture into pixels, computes its threshold, matches it against the thresholds of the trained data, and predicts the output for that particular gesture. The output is converted into text and speech, so that smart assistants can listen to it and deaf users can read it as well.
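The "convert it into pixels" step above can be sketched as follows: flatten a camera frame's RGBA data into a normalised grayscale feature vector before handing it to the classifier. This is a minimal sketch under assumptions: the function name toGrayscaleFeatures is hypothetical, and rgba stands in for the Uint8ClampedArray that a canvas getImageData() call returns in the browser.

```javascript
// Sketch of the pixel-conversion step: RGBA bytes -> grayscale features.
function toGrayscaleFeatures(rgba) {
  const features = [];
  for (let i = 0; i < rgba.length; i += 4) {
    // Luminance-weighted grayscale, scaled into [0, 1]; alpha is ignored.
    const gray = 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
    features.push(gray / 255);
  }
  return features;
}

// Two pixels: pure white and pure black (RGBA quadruples).
console.log(toGrayscaleFeatures([255, 255, 255, 255, 0, 0, 0, 255]));
// approximately [1, 0]
```

Vectors produced this way can be compared with a distance metric, which is what the threshold matching against the trained data amounts to.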

This project was developed by six students:

  1. Ayush Gupta
  2. Shivanshu Bajpai
  3. Piyush Kumar
  4. Vrati Pandey
  5. Aryendra Prakash Singh
  6. Aryan
