AI Based Sign Language Interpreter For Disabled Persons


2025-06-28 16:25:03 - Adil Khan

Project Title

AI Based Sign Language Interpreter For Disabled Persons

Project Area of Specialization

Artificial Intelligence

Project Summary

Disabled persons often face communication problems while interacting with the general public, and therefore use sign language to communicate with others. The main motivation behind this project is to design an artificial intelligence (AI) based robot that can act as an interpreter between disabled persons and the general public. It will use a camera to acquire visual signals, interpret American Sign Language symbols using machine learning, and convert them to English text. The text will then be synthesized to speech. The proposed design will play the synthesized speech on a speaker and display the text on a screen. Real-time input from the camera will be acquired and fed into a Raspberry Pi, where AI-based sign language recognition will convert the predicted signs to text and speech. The proposed interpreter will work in two modes. In the first mode, blind and deaf persons will use the prototype to interpret messages from others. In the second mode, disabled persons can use the prototype to convey messages to the general public. The proposed project can be used in public places and academic institutions to facilitate disabled persons and minimize communication barriers.

Project Objectives

The major objectives of the proposed project are as follows:

Project Implementation Method

The proposed design consists of a robot that takes hand gestures as input from a camera device so that sign language prediction can be performed. The robot works within a specific range to detect signs; once prediction is complete, the result is displayed on the screen and the corresponding audio is played on the speaker. The main blocks of the proposed design methodology are shown in the block diagram and described below:

Camera

The camera will be interfaced with the Raspberry Pi to acquire input frames for further processing. Input from the camera takes the form of hand-gesture frames, in which the hand is later segmented out by ignoring the background so that hand features can be extracted. After acquisition, each image is flipped and then fed to the Raspberry Pi for pre-processing, segmentation, and application of the artificial intelligence algorithm.
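As a minimal sketch of this acquisition step (the helper `grab_frame` below is a hypothetical stand-in for the real camera driver, e.g. picamera2 or OpenCV's `VideoCapture`), the flip can be done with a NumPy slice; with OpenCV available, `cv2.flip(frame, 1)` would be equivalent:

```python
import numpy as np

def grab_frame(height=480, width=640):
    """Hypothetical stand-in for a real camera read (e.g. picamera2 or OpenCV)."""
    return np.zeros((height, width, 3), dtype=np.uint8)

def acquire(flip=True):
    """Acquire one frame and mirror it so left- and right-hand signs match the dataset."""
    frame = grab_frame()
    if flip:
        frame = frame[:, ::-1]  # horizontal flip; same effect as cv2.flip(frame, 1)
    return frame

frame = acquire()
print(frame.shape)  # (480, 640, 3)
```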

Embedded System

The embedded system is responsible for acquiring the visual input signals from the user, running the AI models, and making a prediction for the corresponding gesture. A Raspberry Pi will be used for processing the visual data and for AI-based prediction. The embedded system also interfaces with the display screen and speaker, displaying the recognized gesture as text and playing it back as speech for blind people.

Data Acquisition and Pre-Processing

Visual images of the sign symbols are acquired using the camera and pre-processed. First, the acquired images are flipped to handle both left- and right-hand symbols. Next, segmentation is performed to isolate the user's hand; an area-size threshold limits the distance from the camera to ensure precise results. During hand segmentation, the hand gesture is isolated from the rest of the image by subtracting the background from the frame.
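A simplified sketch of this segmentation step, assuming a stored background frame and using NumPy arrays in place of real camera frames (the threshold values are illustrative, not from the source):

```python
import numpy as np

def segment_hand(frame, background, diff_thresh=30, min_area=5000):
    """Isolate the hand by subtracting the background, then gate on region size.

    Returns a boolean mask, or None when the changed region is too small,
    i.e. the hand is likely too far from the camera for a precise prediction.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff.max(axis=-1) > diff_thresh  # pixels that changed vs. background
    if mask.sum() < min_area:               # area-size threshold from the text
        return None
    return mask

# Toy example: a flat background and a frame with a bright "hand" patch.
bg = np.zeros((200, 200, 3), dtype=np.uint8)
frame = bg.copy()
frame[50:150, 50:150] = 200                 # 100x100 = 10000 changed pixels
mask = segment_hand(frame, bg)
print(mask.sum())  # 10000
```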

AI-Based Sign Recognition

After input is received from the camera, the hand is segmented out and compared against the sign language dataset. Classifier models are trained on this dataset to obtain better results at the output. Once the model is trained, it acquires the visual input, compares it with the dataset, and makes a prediction accordingly.
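The next section mentions a KNN model; a minimal standard-library sketch of such a classifier, with made-up two-dimensional features standing in for whatever hand descriptors the real system extracts:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Predict a sign label by majority vote among the k nearest training samples.

    `train` is a list of (feature_vector, label) pairs; the features here are
    illustrative placeholders, not the project's actual descriptors.
    """
    dists = sorted((math.dist(feats, query), label) for feats, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy dataset: two clusters standing in for two different signs.
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_predict(train, (0.05, 0.1)))  # A
```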

Speech-Synthesis And Display

To assist disabled persons, once prediction is complete, the results are presented according to the requirements of the project prototype. Two different output modes are used to present the results. For the deaf community, the output is shown as text on the display. The output of the KNN model is checked by an auto-correct library to determine whether the predicted word needs correction; if so, the library corrects the word before it is converted to speech.
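A sketch of this output stage, using the standard-library `difflib` as a stand-in for the auto-correct library and print statements in place of a real text-to-speech engine (e.g. eSpeak or pyttsx3 on the Raspberry Pi); the vocabulary below is a tiny illustrative list, not the project's word list:

```python
import difflib

# Small illustrative vocabulary; the real system would use a full English word list.
VOCAB = ["hello", "thanks", "please", "help", "water"]

def autocorrect(word, vocab=VOCAB, cutoff=0.6):
    """Replace the predicted word with its closest vocabulary match, if any."""
    matches = difflib.get_close_matches(word.lower(), vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else word

def speak_and_display(word):
    """Show the corrected word on screen and hand it to a TTS engine (both stubbed)."""
    corrected = autocorrect(word)
    print(f"DISPLAY: {corrected}")  # stands in for the screen output
    print(f"SPEAK:   {corrected}")  # stands in for speaker playback
    return corrected

print(speak_and_display("helo"))  # prints and returns "hello"
```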


Figure 1: Block Diagram of the Proposed Sign Language Interpreter

Benefits of the Project

The major benefits of the project are:

Technical Details of Final Deliverable

The technical details and final deliverable of the project are given as follows:

Final Deliverable of the Project: HW/SW integrated system
Core Industry: Health
Other Industries: IT
Core Technology: Artificial Intelligence (AI)
Other Technologies: Robotics
Sustainable Development Goals: Good Health and Well-Being for People

Required Resources

Item Name             Type           No. of Units   Per Unit Cost (in Rs)   Total (in Rs)
Raspberry Pi          Equipment      1              25000                   25000
Raspberry Pi Camera   Equipment      1              3000                    3000
Speaker               Equipment      1              2000                    2000
Monitor               Equipment      1              10000                   10000
Robotic Structure     Equipment      1              15000                   15000
Keyboard              Equipment      1              2000                    2000
Mouse                 Equipment      1              2000                    2000
Miscellaneous         Miscellaneous  1              10000                   10000
Total (in Rs)                                                               69000
