American Sign Language Detection using CNN (Convolutional Neural Network) and Deep Learning.

This is my final year project on American Sign Language recognition. It uses a Convolutional Neural Network (CNN) to recognize ASL gestures and translate them into written text and speech. The CNN model is trained on a large dataset of ASL images, and the project includes a user interface, an image processing module, and a database of signs and translations. The project aims to make ASL more accessible and to improve communication between the deaf and hearing communities.
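Since the recognized text is also spoken aloud, the pipeline ends with a text-to-speech step. A minimal sketch, assuming the pyttsx3 library is used for speech output (the actual project may rely on a different engine):

```python
# Minimal text-to-speech step; pyttsx3 is an assumption for illustration,
# not necessarily the engine used in this project.
import pyttsx3

engine = pyttsx3.init()
engine.say("HELLO")   # voice the recognized letter or word
engine.runAndWait()   # block until speech has finished
```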

The goal of the project is to develop a machine learning system that can accurately recognize ASL gestures and translate them into written text, making the language more accessible to people who are not familiar with it.

To accomplish this, the project uses a Convolutional Neural Network (CNN), a deep learning model well suited to image recognition tasks such as ASL gesture recognition. The CNN is trained on a large dataset of ASL images to learn the patterns and features of the different gestures.
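As an illustration, here is a minimal Keras sketch of this kind of classifier. The 64x64 grayscale input size, the 26 static alphabet classes, and the layer widths are assumptions; the project's actual architecture may differ.

```python
# Hedged sketch of a small CNN for static ASL alphabet signs.
# Input size, class count, and layer widths are illustrative assumptions.
from tensorflow.keras import layers, models

def build_model(input_shape=(64, 64, 1), num_classes=26):
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),  # guard against overfitting on a small dataset
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```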

The project includes several components: a user interface that lets users make ASL gestures in front of a webcam or other camera, an image processing module that extracts features from the captured frames, and a deep learning model that predicts the corresponding text from the recognized gestures.
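A hedged sketch of what such a capture-and-predict loop can look like with OpenCV is shown below; the model file name asl_model.h5, the fixed hand region, and the 64x64 input size are assumptions for illustration, not the project's exact values.

```python
# Illustrative webcam loop: grab a frame, preprocess a fixed hand region,
# and classify it with a trained CNN. Paths and coordinates are assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("asl_model.h5")                 # assumed model file name
labels = [chr(c) for c in range(ord("A"), ord("Z") + 1)]

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    roi = frame[100:300, 100:300]                  # fixed region where the hand is shown
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    img = cv2.resize(gray, (64, 64)) / 255.0       # match the model's input scale
    pred = model.predict(img.reshape(1, 64, 64, 1), verbose=0)
    letter = labels[int(np.argmax(pred))]
    cv2.rectangle(frame, (100, 100), (300, 300), (0, 255, 0), 2)
    cv2.putText(frame, letter, (100, 90),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("ASL", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```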

Additionally, the project includes a database of ASL signs and corresponding text translations, as well as a training module that allows the deep learning model to be updated with new data.
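Such an update step could look like the following sketch, assuming the new images are organized in class-named folders under a dataset/ directory; the paths, validation split, and epoch count are illustrative assumptions.

```python
# Hedged sketch of updating the trained model with new data.
# The "dataset/" layout, split, and hyperparameters are assumptions.
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing.image import ImageDataGenerator

gen = ImageDataGenerator(rescale=1 / 255.0, validation_split=0.2)
train = gen.flow_from_directory("dataset/", target_size=(64, 64),
                                color_mode="grayscale", class_mode="categorical",
                                subset="training")
val = gen.flow_from_directory("dataset/", target_size=(64, 64),
                              color_mode="grayscale", class_mode="categorical",
                              subset="validation")

model = load_model("asl_model.h5")     # previously trained model
model.fit(train, validation_data=val, epochs=5)
model.save("asl_model.h5")             # overwrite with the updated weights
```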

Overall, the project aims to improve communication and accessibility between the deaf and hearing communities by providing a tool that can accurately recognize and translate ASL gestures into written text.

Running the code locally

Go to the folder Final Project -> Source Code and follow the instructions in the README file there.

Images

Outputs: five screenshots of the recognition interface (not reproduced here; see the repository).
