Eye Gesture Controlled Wheelchair Using Deep Learning
2025-06-28 16:27:10 - Adil Khan
Project Area of Specialization: Electrical/Electronic Engineering

Project Summary
This project is designed for people with quadriplegia and other disabilities, who are the intended beneficiaries of this automatic wheelchair controlled by eye gestures. The system is very cost-effective. The eye-gesture controlled automatic wheelchair using deep learning will be most helpful to patients who feel uncomfortable in society due to their disabilities. Using deep learning, the face is first detected; once the face is detected, the user's eyes are extracted. Convolutional neural networks are the core of both face detection and eye extraction. The eye gesture then determines the movement of the wheelchair.
The implemented system allows disabled persons to control the wheelchair without any assistance. For safety, ultrasonic sensors are mounted on the front of the wheelchair to detect obstacles, and PWM drivers are used for speed control, so these users no longer need hand-controlled wheelchairs. According to a report published by the World Health Organization, about 15% of the world's population lives with some form of disability. Our eye-gesture controlled automatic wheelchair using deep learning will therefore be very useful to disabled persons.
The eye-gesture controlled automatic wheelchair using deep learning is a valuable machine for disabled persons and quadriplegic patients. The wheelchair has been designed using deep learning techniques, which enable the computer to understand image data. The purpose of designing this type of wheelchair is clear: the number of patients suffering from spinal cord injuries and quadriplegia increases every year, and these injuries and diseases leave them unable to move in society without assistance.
However, many paralyzed patients retain control of their eye movements, and this encouraged us to design the eye-gesture controlled automatic wheelchair. The automatic wheelchair builds on basic concepts of deep learning and convolutional networks for the detection and recognition of the eyes. After eye recognition, the resulting signals are transferred to the servo motors, whose speed is controlled by PWM drivers. The eye-controlled chair comprises an electric wheelchair and a webcam in front of the user's eyes capturing eyeball movements with a low-cost Raspberry Pi system, which communicates serially with an Arduino microcontroller to drive the wheelchair in the desired direction. A transfer learning approach was adopted instead of traditional image processing techniques, making the wheelchair more reliable and accurate. The Keras pre-trained VGG-16 model let us achieve excellent performance with a very small training dataset. Unlike conventional wheelchairs, the presented methodology makes this wheelchair equally suitable for people wearing eyeglasses.
Project Objectives
• Face detection
• Eye extraction
• Blink detection
• Image classification model and its training
• Hardware implementation
We have achieved the first four goals, and the hardware implementation is in progress.
Project Implementation Method

The design of the proposed system involved a camera placed in front of the user, which continuously read the current position of the eyeball. This information was sent to the Raspberry Pi/laptop, which made a decision by classifying the image input. Eyeball classification was done using a pre-trained deep convolutional neural network. Based on the specific eyeball position, a unique character was sent to the Arduino, which moved the wheelchair in the desired direction.
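The classify-then-encode step can be sketched as a simple mapping from the predicted eye position to a single command character. The actual characters sent to the Arduino are not specified in this report, so the ones below are purely illustrative:

```python
# Sketch: map a classified eyeball position to one command character
# for the Arduino. The character choices here are illustrative; the
# real encoding is whatever the firmware expects.
COMMANDS = {
    "left": "L",      # turn wheelchair left
    "right": "R",     # turn wheelchair right
    "up": "F",        # move forward
    "middle": "S",    # stop / idle
}

def encode_command(eye_class: str) -> str:
    """Return the serial command character for a predicted eye class."""
    if eye_class not in COMMANDS:
        return COMMANDS["middle"]  # fail safe: stop on unknown input
    return COMMANDS[eye_class]
```

In the real system this character would be written to the serial port with pyserial; the fail-safe default means any unexpected classifier output stops the chair rather than moving it.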
Implemented Algorithm
Face Detection:
The webcam continuously took input from the user, resized it, and converted it from BGR (Blue, Green, Red) format to grayscale. Face detection can be done in many ways; we tried both OpenCV's Haar cascade and Dlib's face detector, and found Dlib's detector to be the more accurate of the two.
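The resize/grayscale/detect pipeline above can be sketched with OpenCV's bundled Haar cascade (Dlib's detector slots into the same place); this is a minimal sketch assuming the opencv-python package, with an illustrative frame size:

```python
import cv2
import numpy as np

# Sketch of the face-detection step: resize, convert BGR -> grayscale,
# then run OpenCV's pre-trained Haar cascade. Dlib's detector can be
# swapped in at the same point in the pipeline.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame_bgr):
    """Return (x, y, w, h) face boxes for one webcam frame."""
    frame_bgr = cv2.resize(frame_bgr, (320, 240))  # illustrative size
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# A blank frame contains no faces, so no boxes are returned.
blank = np.zeros((480, 640, 3), dtype=np.uint8)
print(len(detect_faces(blank)))  # 0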
Facial Landmark Detection
We detected salient facial structures on the face region to extract the eye positions from the face. This task was accomplished with the facial landmark predictor included in the Dlib library, whose facial structure detector is an implementation of [17]. The Dlib pre-trained facial landmark detector was then used to map 68 (x, y) coordinates onto the face.
Eye Localization
Unlike traditional eye-blink detectors, we used the eye aspect ratio (EAR) algorithm developed by Soukupová and Čech [19] for blink detection. The vertical and horizontal distances between the coordinates of an eye serve this purpose.
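The EAR combines those vertical and horizontal distances in a single ratio; a minimal sketch using Euclidean distances, with illustrative landmark values:

```python
from math import dist

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|).

    p1 and p4 are the horizontal eye corners; (p2, p6) and (p3, p5)
    are the vertical landmark pairs. EAR drops toward 0 as the eye
    closes, which is what makes it useful for blink detection.
    """
    vertical = dist(p2, p6) + dist(p3, p5)
    horizontal = dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Illustrative open-eye landmarks: wide horizontally, open vertically.
open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
print(round(eye_aspect_ratio(*open_eye), 2))  # 1.0
```

A closed eye collapses the vertical pairs toward the horizontal axis, so the same function returns a much smaller value for blink frames.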
We used blink detection for initializing our system so that it starts taking directive commands from the user. Fig. 5 shows a complete flow chart of the system. A webcam was placed in front of the user to read images, which were converted into grayscale for face detection. The eyes were then localized in the image. When two blinks were detected, the system was initialized to read eye movements as directive commands for the hardware system.
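The two-blink initialization step can be sketched as a small state machine over per-frame EAR values; the threshold and consecutive-frame count below are illustrative assumptions, not values from the report:

```python
# Sketch: initialize the system after two detected blinks. A blink is
# counted when EAR stays below a threshold for a few consecutive
# frames and then rises again (threshold/frame counts illustrative).
EAR_THRESHOLD = 0.2
CONSEC_FRAMES = 2

def count_blinks(ear_values):
    """Count completed blinks in a sequence of per-frame EAR values."""
    blinks = 0
    below = 0
    for ear in ear_values:
        if ear < EAR_THRESHOLD:
            below += 1
        else:
            if below >= CONSEC_FRAMES:
                blinks += 1  # eye reopened after a sustained closure
            below = 0
    return blinks

def system_initialized(ear_values):
    """The system starts reading directive commands after two blinks."""
    return count_blinks(ear_values) >= 2

stream = [0.3, 0.3, 0.1, 0.1, 0.3, 0.3, 0.1, 0.1, 0.1, 0.3]
print(system_initialized(stream))  # True
```

Requiring a sustained closure filters out single-frame EAR dips caused by landmark jitter.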
Classification Algorithm
To allow the wheelchair to move in a specific direction, we implemented a technique based on deep learning, which has dominated the computer vision field over the last few years. The wheelchair operates on a signal from the system, which classifies the user's eye into its respective position: left, right, up, or middle. Transfer learning allowed us to reuse a pre-trained model that had already been trained on millions of images. By leveraging a pre-trained model for eye classification, we avoided the need for a large training dataset; the transfer learning approach saved a great deal of time and computational resources without affecting decision accuracy.
Training Data
Inputs from a user looking in the left, right, middle, and upward directions were captured and placed in the respective folders of the training dataset. The directories were made with Keras functions.
Pre-Trained Model
VGG-16 was introduced by Simonyan and Zisserman [20]. The VGG model was selected for its excellent classification accuracy.
Benefits of the Project

• EOG, EEG, and IROG based methods are costly, inefficient, and difficult to use. The combination of low cost and up-to-date technology makes our wheelchair a valuable effort for quadriplegic persons.
• It runs on dry-cell batteries and emits no fumes that would pollute the environment in any way. It adds value to society by boosting the confidence of patients suffering from spinal cord injuries, ALS, etc. It has a positive impact on the stakeholders, as they provide their services to a great cause that is not only economical but also has a pleasant effect on society.
Technical Details of Final Deliverables
1) Facial Detection
Face detection can be done in many ways. We used both OpenCV's Haar cascade and Dlib's face detector, and found Dlib's detector to be the more accurate of the two. The Dlib library uses a pre-trained face detector based on a modification to [16] for object detection.
2) Eye Extraction
The facial landmarks produced by the Dlib function follow an indexable list. Each eye is represented by six (x, y) coordinates, so with the index values known, the eyes were extracted effortlessly.
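In the standard 68-point scheme used by Dlib's predictor, one eye occupies indices 36–41 and the other 42–47, so extraction is a pair of list slices. A minimal sketch with dummy landmark values:

```python
# Sketch: extract the two eyes from the 68-point landmark list.
# In Dlib's 68-point scheme, points 36-41 are the subject's right eye
# and points 42-47 the subject's left eye.
RIGHT_EYE = slice(36, 42)
LEFT_EYE = slice(42, 48)

def extract_eyes(landmarks):
    """Split a 68-element list of (x, y) points into the two eyes."""
    assert len(landmarks) == 68, "expected the full 68-point set"
    return landmarks[RIGHT_EYE], landmarks[LEFT_EYE]

# Dummy landmark set: point i is placed at (i, i).
points = [(i, i) for i in range(68)]
right_eye, left_eye = extract_eyes(points)
print(len(right_eye), len(left_eye))  # 6 6
```

The six points per eye are exactly the p1 through p6 used by the EAR formula in the blink-detection step.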
3) Blink Detection
Unlike traditional eye-blink detectors, we used the eye aspect ratio (EAR) algorithm developed by Soukupová and Čech [19] for blink detection. The vertical and horizontal distances between the coordinates of an eye serve this purpose:
EAR = (‖p2 − p6‖ + ‖p3 − p5‖) / (2‖p1 − p4‖)
where p1, p2, p3, p4, p5, and p6 are the facial landmarks shown in Fig. 2. As described in the implementation method above, two detected blinks initialize the system, after which eye movements are read as directive commands for the hardware system.
4) Image Classification and its training
Training Data
Inputs from a user looking in the left, right, middle, and upward directions were captured and placed in the respective folders of the training dataset. The directories were made with Keras functions, which create a directory structure in which the images of each class are saved in a subdirectory of the training and validation datasets. The images were then split into 175 for training and 75 for validation.
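The directory convention described above (one subdirectory per class under the train and validation roots) can be sketched with the standard library; the class names follow the four eye positions in the text, while the root path here is a throwaway temporary directory:

```python
import pathlib
import tempfile

# Sketch: build the Keras-style dataset layout, one subdirectory per
# eye-position class under train/ and validation/. Keras generators
# such as flow_from_directory infer class labels from these folders.
CLASSES = ["left", "right", "middle", "up"]

def make_dataset_dirs(root):
    """Create train/<class> and validation/<class> under root."""
    root = pathlib.Path(root)
    for split in ("train", "validation"):
        for cls in CLASSES:
            (root / split / cls).mkdir(parents=True, exist_ok=True)
    return root

root = make_dataset_dirs(tempfile.mkdtemp())
print(sorted(p.name for p in (root / "train").iterdir()))
```

The captured frames would then be saved into these folders, 175 per class split going to train and 75 to validation.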
Pre-Trained Model
VGG-16 was introduced by Simonyan and Zisserman [20]. The VGG-16 model was selected because it showed excellent classification accuracy and flexibility across all types and levels of distortion compared with other networks. The VGG-16 model was initialized without its final fully connected layers. A data generator was created for the training images and run through the VGG-16 model to save all the features; a small fully connected model was then trained on those extracted features to produce the classified output.
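A minimal Keras sketch of this setup: VGG-16 without its top layers acts as a frozen feature extractor, with a small fully connected head classifying the four eye positions. The input size and dense-layer width are assumptions, and `weights=None` is used here only to avoid downloading the ImageNet weights; the real system would load `weights="imagenet"`:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Sketch: frozen VGG-16 convolutional base plus a small dense head
# for the four eye positions (left, right, middle, up). Sizes are
# illustrative; weights=None skips the ImageNet download for this demo.
base = VGG16(weights=None, include_top=False, input_shape=(64, 64, 3))
base.trainable = False  # transfer learning: keep convolutional weights

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),  # one unit per eye position
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
print(model.output_shape)  # (None, 4)
```

Training would then call `model.fit` on the generators built from the class-per-subdirectory dataset layout.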
5) Hardware implementation using batteries, motors, a camera, etc.
The system was based on real-time data acquisition. The Python-based eye classification algorithm was successfully implemented both on a laptop and on a Raspberry Pi 4B, which supports a 32 GB memory card. A small webcam mounted on the wheelchair took input from the user.
Python Bridge is a Python application used to communicate with the Arduino Nano through the pyserial library. A potentiometer was connected to the Arduino's analog input to vary the speed of the left and right motors, and a buck converter module was used to step down the 24 V supply.
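The potentiometer-based speed control reduces to scaling the Arduino's 10-bit ADC reading (0–1023) to an 8-bit PWM duty value (0–255). The firmware itself would be Arduino C, but the arithmetic can be sketched in Python, mirroring Arduino's `map()` idiom:

```python
# Sketch: map a 10-bit potentiometer reading (0-1023) from the
# Arduino's analog input to an 8-bit PWM duty value (0-255),
# mirroring Arduino's map(value, 0, 1023, 0, 255).
def adc_to_pwm(reading: int) -> int:
    """Scale an ADC reading to a PWM duty cycle, clamping bad input."""
    reading = max(0, min(1023, reading))  # clamp out-of-range readings
    return reading * 255 // 1023

print(adc_to_pwm(0), adc_to_pwm(511), adc_to_pwm(1023))  # 0 127 255
```

The same duty value would be applied to both motor channels, with the direction chosen by the eye-gesture command character.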
Final Deliverable of the Project: HW/SW integrated system
Core Industry: Medical
Other Industries: Others
Core Technology: Big Data
Other Technologies: Others
Sustainable Development Goals: Good Health and Well-Being for People

Required Resources

| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Raspberry Pi 4B | Equipment | 1 | 18000 | 18000 |
| Pi Camera | Equipment | 1 | 850 | 850 |
| Arduino Nano | Equipment | 1 | 1100 | 1100 |
| Batteries | Equipment | 2 | 7000 | 14000 |
| Wheelchair | Equipment | 1 | 36000 | 36000 |
| Total (in Rs) | | | | 69950 |