Deep Learning Based Human Motion Recognition
2025-06-28 16:26:05 - Adil Khan
Project Area of Specialization: Artificial Intelligence

Project Summary
Pakistan has a high prevalence of hearing and speech disabilities, yet there is a severe lack of preventive and assistive solutions for sign language conversion. Robotic and simulation technologies have proven to be worthy components of a solution to this problem. This has motivated the development of a humanoid robot that acts as an avatar for real-time sign language translation for the deaf and mute community.
Generating and detecting complete sentences of sign language, and implementing this on hardware, is a very complex problem: on one hand, speech must be converted into text and then into sign language; on the other, sign language must be detected, analyzed, and converted back into speech. Taking inspiration from the tremendous advances in the field of AI, we focus on machine learning and convolutional neural networks (CNNs). In short, we want the machine to learn a model of sign language and use that knowledge to generate and detect signs.
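To make the CNN component concrete, below is a minimal NumPy sketch of the basic building blocks a sign-recognition network stacks together (convolution, ReLU, pooling). The toy frame and edge kernel are illustrative placeholders, not our trained model; a real system would learn the kernels from labeled gesture data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A toy 8x8 "gesture" frame passed through one conv + ReLU + pool stage.
frame = np.arange(64, dtype=float).reshape(8, 8)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
features = max_pool(relu(conv2d(frame, edge_kernel)))
print(features.shape)  # (3, 3)
```

A trained CNN repeats such stages several times and ends with a classifier layer that maps the pooled features to sign labels.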
Project Objectives
One of the major shortcomings of society is the social barrier between disabled persons and those blessed with all their human faculties. One of the greatest barriers a disabled person faces in life is the inability to communicate with a hearing person. Communication, which is the basis of human progress, often becomes an obstacle for deaf and mute people who are unable to articulate their thoughts.
To be more specific, we use AI to generate Pakistan Sign Language for persons with special needs. Our ultimate objectives are four-fold:
- Take speech input from a hearing person.
- Output complete sentences in sign language.
- Capture the sign gestures of the hearing-impaired person.
- Output the recognized signs as speech through a speaker.
Our aim is to come up not only with theoretical results but also with practical solutions for the deaf and mute community.
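The four objectives above can be sketched as one round-trip pipeline. Everything here is a hypothetical placeholder: `PSL_SIGNS` is a toy two-word lexicon and the function bodies are stubs standing in for real speech-recognition, gesture-synthesis, and text-to-speech components.

```python
# Toy Pakistan Sign Language lexicon: word -> gesture label (illustrative only).
PSL_SIGNS = {"hello": "SIGN_HELLO", "thanks": "SIGN_THANKS"}

def speech_to_text(audio):
    """Placeholder: a real system would run a speech recognizer here.
    For the sketch we pretend the audio is already transcribed text."""
    return audio.lower().strip()

def text_to_signs(text):
    """Map each known word to a sign-gesture label for the robot's arms."""
    return [PSL_SIGNS.get(w, "SIGN_UNKNOWN") for w in text.split()]

def signs_to_text(signs):
    """Inverse lookup, used when reading gestures from the impaired person."""
    inv = {v: k for k, v in PSL_SIGNS.items()}
    return " ".join(inv.get(s, "?") for s in signs)

# Hearing person -> signs, then signs -> speech text for the speaker.
signs = text_to_signs(speech_to_text("Hello thanks"))
print(signs)                 # ['SIGN_HELLO', 'SIGN_THANKS']
print(signs_to_text(signs))  # hello thanks
```

In the deployed system the CNN replaces the dictionary lookups, and the robot's arm controller and speaker consume the gesture labels and recovered text.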
Project Implementation Method
Creating the structure: Build a humanoid robot structure with hands, legs, and a head, so that it can perform human-like movements.
Creating the deep learning algorithm: Develop a deep learning algorithm that enables the robot to act as a mediator between hearing and special-needs persons and to carry out the tasks assigned to it, so that it can perform human-like work.
Implementing the deep learning algorithm: Deploy the deep learning algorithm on the robot, so that we can test whether it works in real time and measure how well the model performs.
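A real-time deployment step like this usually needs prediction smoothing, since a per-frame classifier flickers on a live camera feed. The sketch below is an assumption about how such a loop might look: `classify_frame` is a stand-in that thresholds mean brightness, where the robot would instead call the trained CNN.

```python
import numpy as np

def classify_frame(frame, threshold=0.5):
    """Stand-in for the trained CNN: thresholds mean brightness.
    On the robot this would be the deployed model's prediction."""
    return "sign_detected" if frame.mean() > threshold else "no_sign"

def run_realtime(frames, window=3):
    """Smooth per-frame predictions over a short sliding window, as a
    real-time loop on the embedded board might, to suppress flicker."""
    history, outputs = [], []
    for frame in frames:
        history.append(classify_frame(frame))
        history = history[-window:]
        # Emit the majority label over the last `window` frames.
        outputs.append(max(set(history), key=history.count))
    return outputs

# Synthetic 4x4 grayscale frames standing in for the camera feed.
bright = np.ones((4, 4))
dark = np.zeros((4, 4))
# All four outputs are 'sign_detected': the one dark frame is outvoted.
print(run_realtime([bright, bright, bright, dark]))
```

The same structure lets us measure real-time performance: timing one pass of `run_realtime` over recorded frames gives the achievable frame rate on the embedded board.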
Benefits of the Project
As this project combines software and hardware, its results will greatly benefit educational institutes. Moreover, the product is designed to act as a mediator between hearing and special-needs persons. It could spark interest in robotics within academia and lead to industry investment in this area.
Advances in technology continue to push the envelope in healthcare, travel, communication, and education. Robotic and simulation technologies have proven to be worthy additions to the available educational resources.
Robotics is taking students with special needs to a new level of learning in educational institutes. With the help of these technologies, students with autism and other developmental issues can develop better communication and social skills. The robots can also serve as constant companions and health-monitoring systems, and can be programmed to suit each individual child's needs, offering special education in a simpler and more accessible format.
Technical Details of Final Deliverable
The robot will be able to perform tasks assigned to it, such as picking up objects. It will also communicate naturally with the special-needs person through synthetic speech and flexibly produce sign language with arm gestures at run-time, without being limited to a predefined repertoire of motor actions.
Final Deliverable of the Project: HW/SW integrated system
Core Industry: Others
Other Industries: Education
Core Technology: Artificial Intelligence (AI)
Other Technologies: Robotics
Sustainable Development Goals: Industry, Innovation and Infrastructure

Required Resources

| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| 3D structure parts | Equipment | 2 | 15000 | 30000 |
| Camera module | Equipment | 1 | 2000 | 2000 |
| Embedded boards | Equipment | 2 | 19000 | 38000 |
| Miscellaneous (report printing, poster, etc.) | Miscellaneous | 1 | 10000 | 10000 |
| Total (in Rs) | | | | 80000 |