Sign Language Recognition Using Dynamic Hand Gestures By Deep Learning
Our project aims to create a computer application and train a model that, when shown real-time video of American Sign Language hand gestures, displays the corresponding text for each sign and forms sentences on the screen.
2025-06-28 16:29:04 - Adil Khan
Project Area of Specialization: Computer Science
Project Objectives

The project will translate signs into text and build sentences in real time through a camera feed.
It will accurately recognize signs, convert them into their corresponding text, and combine that text into sentences.
While forming sentences, it will correct the recognized text using an autocorrect dictionary.
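The autocorrect step described above could be sketched with the standard-library `difflib` module; the mini-dictionary and function names below are illustrative, not taken from the project code.

```python
import difflib

# Hypothetical mini-dictionary; a real system would load a full word list.
DICTIONARY = ["hello", "world", "thank", "you", "please", "help"]

def autocorrect(word, dictionary=DICTIONARY):
    """Return the closest dictionary word, or the word itself if no match."""
    matches = difflib.get_close_matches(word.lower(), dictionary, n=1, cutoff=0.6)
    return matches[0] if matches else word

def form_sentence(recognized_words):
    """Join per-sign text outputs into a spelling-corrected sentence."""
    return " ".join(autocorrect(w) for w in recognized_words)

print(form_sentence(["helo", "wrold"]))  # -> "hello world"
```

A production system would likely use a larger dictionary and a language model for word ordering, but the similarity-matching idea is the same.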
Project Implementation Method

- Data Acquisition
- Data Pre-processing
- Feature Extraction
- Gesture Classification
- Creating the Model for Gesture Recognition
- Training the Model
- Saving the Trained Model
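The recognition stages listed above can be organized as a simple pipeline. The sketch below uses placeholder functions with toy logic; a real implementation would use OpenCV for capture and pre-processing and a deep-learning framework for the model.

```python
# Illustrative end-to-end pipeline for the steps listed above.
# All functions are placeholders, not the project's actual code.

def acquire_frame(source):
    """Data Acquisition: grab one frame from the video source."""
    return source.pop(0)  # stand-in for a camera read

def preprocess(frame):
    """Data Pre-processing: e.g. resize, grayscale, blur, normalize."""
    return [p / 255.0 for p in frame]  # toy normalization to [0, 1]

def extract_features(frame):
    """Feature Extraction: reduce the frame to a feature vector."""
    return [sum(frame) / len(frame)]  # toy feature: mean intensity

def classify(features, model):
    """Gesture Classification: map features to a sign label."""
    return model(features)

def recognize(source, model):
    """Run one frame through the whole pipeline."""
    frame = acquire_frame(source)
    frame = preprocess(frame)
    feats = extract_features(frame)
    return classify(feats, model)

# Toy usage: a "model" that thresholds the mean intensity.
toy_model = lambda f: "A" if f[0] > 0.5 else "blank"
print(recognize([[200, 220, 240]], toy_model))  # -> "A"
```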
Benefits of the Project

To provide better community services, we aim to build a system that helps create a caring community with:

- Self-sufficiency
- Dignity
- Harmony
- Happiness

A communication barrier exists between hearing people and deaf-and-mute (D&M) individuals, because sign language has a structure distinct from standard spoken or written sentences. As a result, D&M individuals rely on vision-based signing to interact, and every day hundreds of signers struggle to communicate with strangers and with people who do not sign. To ensure that everyone can understand each other without misunderstanding, we will develop an upgradable sign language translator.
Technical Details of Final Deliverable

- The project will capture images from real-time video and extract features from them.
- It will classify among 32 symbols (ASL alphabet letters, words, and a blank sign), then distinguish between similar symbols, and output the corresponding text using a CNN model.
- The CNN will convolve the images and apply max pooling, extracting progressively richer feature maps step by step, and then flatten the final feature map to produce the output.
- Using autocorrect features, the project will form sentences with fewer errors and better sentence-forming accuracy.
- It will predict the correct text with an accuracy of up to 96% and use that text to build sentences.
- It will filter and classify frames to produce high-quality images, making it easier to achieve better accuracy.
- Accuracy is increased from 91.7% to 96% by adding a separate layer in which similar symbols are compared against high-quality datasets.
- Using a larger input image size and applying a Gaussian filter makes it easier to predict the text for the symbol shown in the image.
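The convolve, max-pool, and flatten operations mentioned in the bullets above can be illustrated in pure Python on a tiny image. This is a minimal sketch for clarity, not the project's model; a real CNN would be built with a deep-learning framework such as TensorFlow/Keras.

```python
# Minimal sketch of conv -> max-pool -> flatten on a tiny 4x4 "image".

def convolve2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as CNNs compute it)."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

def max_pool2d(image, size=2):
    """Non-overlapping max pooling: shrinks each spatial dimension."""
    out = []
    for i in range(0, len(image) - size + 1, size):
        row = []
        for j in range(0, len(image[0]) - size + 1, size):
            row.append(max(image[i + a][j + b]
                           for a in range(size) for b in range(size)))
        out.append(row)
    return out

def flatten(image):
    """Flatten the final feature map into a vector for the output layer."""
    return [v for row in image for v in row]

image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
kernel = [[1, 0],
          [0, 1]]                         # toy 2x2 diagonal-sum kernel
features = convolve2d(image, kernel)      # 3x3 feature map
pooled = max_pool2d(features)             # 1x1 pooled map
print(flatten(pooled))  # -> [17]
```

Stacking several such convolution and pooling stages is what lets a CNN build up richer features before the flattened vector is fed to the classification layers.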
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| eGPU | Equipment | 1 | 40000 | 40000 |
| Logitech Camera | Equipment | 1 | 3500 | 3500 |
| Stationery | Miscellaneous | 15 | 100 | 1500 |
| Wires | Equipment | 5 | 300 | 1500 |
| Total (in Rs) | | | | 46500 |