Sign language system

2025-06-28 16:29:04 - Adil Khan

Project Title

Sign language system

Project Area of Specialization: Information & Communication Technology

Project Summary

Hand gestures are one of the methods used in sign language for non-verbal communication. Sign language is most commonly used by people with hearing or speech impairments to communicate among themselves or with others. Various sign language systems have been developed around the world, but they are neither flexible nor cost-effective for end users. This project therefore introduces software presenting a system prototype that automatically recognizes sign language, helping hearing- and speech-impaired people communicate more effectively with each other and with everyone else. Pattern recognition and gesture recognition are growing fields of research, and as a significant part of non-verbal communication, hand gestures play a key role in daily life. A hand gesture recognition system provides an innovative, natural, and user-friendly way of communicating with a computer. Keeping in mind the shape of the human hand, with four fingers and one thumb, the software aims to present a real-time system for hand gesture recognition based on the detection of shape-based features such as orientation, centre of mass (centroid), finger status, and thumb position, i.e. whether the fingers of the hand are raised or folded.

Project Objectives

The objective of our project is to provide an efficient and accurate way to convert sign language into text. This can serve, for example, as an aid for the hearing impaired, or enable very young children to interact with computers through recognized signs, among other applications.

Project Implementation Method

We will implement an application that detects pre-defined Pakistan Sign Language (PSL) hand gestures. For detecting gesture movement we will use the OpenCV (cv2) library, with an external camera as the only hardware requirement. We have an MNIST-style sign language dataset for training.

The application will have two main modules. The frontend, built with PyQt5, will comprise two core functions: the first simply detects a gesture and displays the corresponding alphabet letter; the second stores the scanned frame into a buffer after a fixed interval, so that a string of characters can be accumulated to form a meaningful word.

Additionally, we plan to define custom gestures for special characters such as the period (.) or other delimiters, so that a user can compose whole sentences and build them up into paragraphs. Whatever the predicted output is, it will be stored in a .txt file.
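The buffering step described above, accumulating one predicted letter per scan interval and writing the assembled text to a .txt file, can be sketched as follows. The `GestureBuffer` class, the stubbed per-frame predictions, and the output filename are hypothetical illustrations; the real classifier would supply the letters.

```python
class GestureBuffer:
    """Accumulates predicted letters and flushes them to a text file."""

    def __init__(self, delimiter="."):
        # A custom gesture would be mapped to this delimiter character
        # to mark the end of a sentence.
        self.delimiter = delimiter
        self.chars = []

    def push(self, letter):
        # Called once per scan interval with the classifier's prediction.
        self.chars.append(letter)

    def flush(self, path):
        # Join buffered characters into a string and store it in a .txt file.
        text = "".join(self.chars)
        with open(path, "w") as f:
            f.write(text)
        return text

buf = GestureBuffer()
for predicted in ["H", "I", "."]:  # stand-ins for per-frame predictions
    buf.push(predicted)
print(buf.flush("output.txt"))
```

In the application itself, `push` would be driven by a timer tick (e.g. a PyQt5 `QTimer`) rather than a fixed list.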

Benefits of the Project

Stronger bond between parents and infants.

Improved spatial reasoning.

Enhanced ability to interpret body language. 

Better reaction times and peripheral vision.

Long-term cognitive benefits of learning sign language.

Technical Details of Final Deliverable

Some of the major problems faced by people who are unable to speak are that they cannot express their emotions freely; they cannot use the voice recognition and voice search features in smartphones; audio results cannot be retrieved; and they are unable to use voice-driven assistants (AI "personal butlers") such as Google Assistant or Apple's Siri, because all of those applications are controlled by voice.


Final Deliverable of the Project: Software System
Core Industry: IT
Other Industries:
Core Technology: Others
Other Technologies:
Sustainable Development Goals: Partnerships to achieve the Goal

Required Resources
Item Name | Type      | No. of Units | Per Unit Cost (in Rs) | Total (in Rs)
Webcam    | Equipment | 1            | 10000                 | 10000

Total (in Rs): 10000