Creating a two-way translating system for the deaf and mute using the Microsoft Kinect motion sensor


2025-06-28 16:26:01 - Adil Khan

Project Title

Creating a two-way translating system for the deaf and mute using the Microsoft Kinect motion sensor

Project Area of Specialization: Computer Science

Project Summary

This project implements a two-way communication method with two modes: 1) translation mode and 2) communication mode, making communication possible between deaf and mute people and hearing people. The Microsoft Kinect sensor captures gestures from the deaf and mute person and translates them into text; in the other direction, the system takes a set of texts from the hearing person and translates it into an animated signing sequence.

In translation mode, the sign language is sensed by the Kinect motion sensor and translated into words and audio form. In communication mode, a sentence typed into the system is visualized by an animated character performing the same sentence in sign language.

As hearing and speech impairments become more common, those affected suffer considerably from the communication barrier. This project will ease that burden and let them cope with everyday interactions independently. The initial stages will require research, followed by development of the system, which will use the SDK Microsoft provides for its sensor. A desktop application will be developed to bring all components of the project together on one platform. Portability, however, will remain a limitation.
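The two-mode design above can be sketched roughly as follows; the class and method names, and the placeholder gesture-to-word mapping, are illustrative assumptions for this proposal, not the actual Kinect SDK API.

```python
class Translator:
    """Sketch of the two operating modes described above."""

    def translation_mode(self, gestures):
        # Kinect-sensed gestures -> text (and, later, audio output).
        words = [self.gesture_to_word(g) for g in gestures]
        return " ".join(w for w in words if w)

    def communication_mode(self, sentence):
        # Typed text -> sequence of animation clips for the avatar.
        return [self.word_to_animation(w) for w in sentence.split()]

    def gesture_to_word(self, gesture):
        # Placeholder: the real system matches against a gesture dictionary.
        return {"wave": "hello"}.get(gesture)

    def word_to_animation(self, word):
        # Placeholder: the real system selects a pre-built sign animation.
        return f"anim_{word}"
```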

Project Objectives

The vision is simple and straightforward: the world needs a system that allows special-needs people to communicate as naturally as everyone else. The objectives are:

  1. To design a system that captures gestures

  2. To build a system that is very easy to use

  3. To design a system that converts text into animated gestures on the screen

  4. To make the system efficient and effective in terms of UI/UX

  5. To arrange the system so that it helps anyone with such a condition

Project Implementation Method

This project is designed for daily use in places such as government offices, medical facilities, and restaurants. Implementation is planned so that all targeted areas are covered, and brief tutorial videos and pictorials must be shared with end-users on both sides, because special-needs people should be served like everyone else. This will increase opportunities for special-needs people across the nation. Keeping these target venues in mind makes it clear how the system would be introduced to the market and where its impact would be greatest. The system has two key deployment requirements: 1. good lighting, and 2. a manageable distance (sufficient space) between the user and the sensor.

Benefits of the Project

People with hearing and speech impairments suffer greatly from the communication barrier. This project will ease that burden and let them handle everyday interactions independently. Both government and private-sector services will become easily accessible to special-needs people. In addition, producing this project contributes to several UN-defined SDGs.

Technical Details of Final Deliverable

The integrated hardware and software give the user an easy, accessible system through which they can communicate fully.

The system has two main functions:

In the first, the Kinect sensor senses the gestures produced in front of it; in the second, the system translates text entered into the program and presents an animated character that renders the entered text in sign language.

The solution relies on an infrared emitter and infrared depth sensor; here the Microsoft Kinect sensor is used, which has a reported 92% accuracy rate. It recognizes the person's sign-language gestures and interprets them into text using a data dictionary already fed into the database. The database is designed so that each sensed gesture is compared against the dictionary, the matching entry is picked, and the corresponding word is displayed as text. Arrays and lists are used for storage, while fetching uses searching procedures such as binary or exponential search. A useful property of sign language is that it has no helping verbs, which lets the system pick only the main words and convert them into animation.
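The dictionary fetch and the helping-verb filtering described above can be sketched as follows. The gesture keys, dictionary entries, and helping-verb set are illustrative assumptions; the real data dictionary would be keyed on whatever skeleton features the Kinect pipeline extracts.

```python
from bisect import bisect_left

# Hypothetical gesture dictionary: (gesture_key, word) pairs kept
# sorted by key so binary search can be used for fetching.
GESTURE_DICTIONARY = sorted([
    ("fist_chest", "sorry"),
    ("flat_palm_forward", "stop"),
    ("wave_right_hand", "hello"),
])

def lookup_gesture(gesture_key):
    """Binary-search the sorted dictionary for a sensed gesture key."""
    keys = [k for k, _ in GESTURE_DICTIONARY]
    i = bisect_left(keys, gesture_key)
    if i < len(keys) and keys[i] == gesture_key:
        return GESTURE_DICTIONARY[i][1]
    return None  # no matching sign in the data dictionary

# Communication mode: drop helping verbs so only the main words
# are animated, mirroring how sign language omits them.
HELPING_VERBS = {"is", "am", "are", "was", "were", "be", "been",
                 "do", "does", "did", "have", "has", "had", "will"}

def keywords_for_animation(sentence):
    return [w for w in sentence.lower().split() if w not in HELPING_VERBS]
```

A production system would likely store the dictionary in a real database table with an index, but the sorted-list-plus-binary-search sketch matches the storage and fetching approach the proposal describes.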

Final Deliverable of the Project: HW/SW integrated system
Core Industry: IT
Other Industries:
Core Technology: Others
Other Technologies:
Sustainable Development Goals: No Poverty; Zero Hunger; Industry, Innovation and Infrastructure; Reduced Inequality; Sustainable Cities and Communities

Required Resources
Item Name                                                         Type           No. of Units  Per Unit Cost (in Rs)  Total (in Rs)
Microsoft Xbox One Kinect Sensor – Black                          Equipment      1             30000                  30000
Xbox Kinect Adapter for Xbox One S and Windows 10 PC, Microsoft   Equipment      1             20000                  20000
Stationery                                                        Miscellaneous  1             10000                  10000
Total (in Rs)                                                                                                         60000
