AI-Based Navigation System for the Blind
The proposed project is a navigational system that would aid the visually impaired in finding their way through obstacles, hazards, and dangerous landscapes. A stereovision technique is used to measure the distance between the user and any detectable object, such as a car, another person, a dog, or any of the 91 objects present in the navigation system's catalogue. For objects and obstacles that cannot be detected using the stereoscopic object detection algorithm, two strategically mounted ultrasonic sensors will be used. Anything like a staircase, uneven terrain, a boulder, a curb, or any large contraption can be easily detected in this way.
The user can also personalize the cane: it utilizes a CNN-based face recognition system that recognizes any person whose pictures it was trained on.
Some additional sensors are used to efficiently translate the environment for the user: a moisture detector for slippery pavement, a heat sensor to protect against fire hazards, and a light sensor to alert the user to the lighting conditions.
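A minimal sketch of how these environmental readings could be translated into spoken alerts. The threshold values and function name below are illustrative assumptions, not calibrated figures from the project:

```python
# Hypothetical sketch: mapping raw environmental sensor readings to short
# alert strings. All thresholds are assumed values for illustration only.

def environment_alerts(moisture, temperature_c, light_level):
    """Return a list of alert strings for the user.

    moisture      -- 0..1023 ADC reading from the moisture detector
    temperature_c -- temperature in Celsius from the heat sensor
    light_level   -- 0..1023 ADC reading from the light sensor
    """
    alerts = []
    if moisture > 600:            # assumed threshold for wet pavement
        alerts.append("Caution: slippery pavement ahead")
    if temperature_c > 50:        # assumed threshold for a fire hazard
        alerts.append("Warning: high heat detected")
    if light_level < 200:         # assumed threshold for poor lighting
        alerts.append("Note: low lighting conditions")
    return alerts
```

In the cane, each returned string would then be passed to the speech module rather than printed.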
The overall system will not run all of the smart cane's modules at once. The user selects the mode that best suits the circumstances. For example, users who want to recognize the people around them may initiate the face recognition module; similarly, when the user is moving, the best choice is to initiate the stereovision module.
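The one-module-at-a-time behaviour can be sketched as a simple dispatch table. The mode numbers and handler names below are placeholders, not the project's actual function names:

```python
# Hypothetical sketch of the mode-selection logic: the user picks one module
# at a time, and only that module's handler runs. Handlers are stand-ins.

def run_stereovision():
    return "stereovision active"

def run_face_recognition():
    return "face recognition active"

def run_sensor_suite():
    return "sensor suite active"

MODES = {
    1: run_stereovision,       # e.g. while walking
    2: run_face_recognition,   # e.g. in a social setting
    3: run_sensor_suite,       # e.g. probing the immediate environment
}

def select_mode(button):
    """Dispatch to the handler for the pressed button, if any."""
    handler = MODES.get(button)
    return handler() if handler else "unknown mode"
```

On the real cane, `button` would come from the mode-select buttons wired to the Jetson Nano's GPIOs.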
The output will be a voice generated using the Google Text-to-Speech module (gTTS). All module outputs in this project will be voiced through gTTS.
All the algorithms and sensors are run and controlled by the Jetson Nano 2GB board and the Arduino Uno, respectively. The two boards communicate over a serial link.
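On the Jetson side, the Arduino's serial messages must be parsed into usable values. The line-based `KEY:VALUE,KEY:VALUE` wire format below is an assumption for illustration, not the project's documented protocol; in practice the raw line would typically arrive via pyserial's `Serial.readline()`:

```python
# Hypothetical sketch of parsing one line-based serial message from the
# Arduino. The "KEY:VALUE,KEY:VALUE" format is an assumed example format.

def parse_sensor_line(line):
    """Parse e.g. b'HEAT:28,WATER:1,LIGHT:512\n' into a dict of ints."""
    fields = line.decode("ascii").strip().split(",")
    readings = {}
    for field in fields:
        key, _, value = field.partition(":")
        readings[key] = int(value)
    return readings
```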
Humanity by nature strives to realize a utopian dream of justice, equality, and welfare for all. With the advent of modern technology, the quality of welfare and its benefits has greatly increased, especially in the health sector. Scientists and engineers have made many devices that readily diagnose, aid, or attempt to rectify certain conditions; wheelchairs and hearing aids are examples of the latter. However, there still remains much to improve in the field of ophthalmology when it comes to aids for the naturally blind.
Our aim is to make a device that is not only effective in assisting the visually impaired but also affordable and easy to use. To understand the world from a blind person's perspective, it is sometimes useful to focus all our attention on auditory, olfactory, and physical stimuli. This gives us enough perspective on how confusing it can get when there is so much redundant information and noise pollution. Therefore, it is our foremost priority to make a system that communicates with the user in a timely manner and does not give redundant outputs that might confuse him/her. Concretely, our main design objectives are timely, non-redundant communication, affordability, and ease of use.
The project is mainly implemented on a controller and a board acting as the central processing unit, i.e., the Arduino Uno and the Jetson Nano 2GB. The project was implemented in modules: some, such as the sensors, are controlled by the Arduino, while the algorithms run on the Jetson Nano. The Jetson Nano governs the Arduino in a sense, since the buttons that select the system's operating mode are connected to the former's GPIOs. The following is an illustration of the whole project:

Even though object detection and stereovision utilize very different techniques, they depend on each other here: the stereovision algorithm needs a detected object to focus on, and object detection alone is of little use if the object's proximity is unknown, especially in the capacity it is being used in this project. First, the object is detected by forward propagation on the pretrained ImageNet weights; the detected object is then fed to the stereovision algorithm, which is based on the triangulation technique. The object is detected using both cameras, which are mounted 6 cm apart. The cameras were calibrated using the checkerboard technique.
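For a calibrated, rectified stereo pair, the triangulation step reduces to the standard relation Z = f·B/d. The 6 cm baseline comes from the camera mounting described above; the focal length value below is an assumed placeholder that would come from the checkerboard calibration:

```python
# Depth from disparity for a calibrated, rectified stereo pair:
#   Z = f * B / d
# f: focal length in pixels (assumed value; obtained from calibration),
# B: baseline between the cameras (6 cm, as mounted on the cane),
# d: disparity in pixels between the object's position in the two images.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.06):
    """Return the distance to the object in metres."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: nearer objects produce larger disparities, so depth resolution degrades with distance.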
The face recognition module utilizes the face_recognition library on the Jetson Nano. This library implements a face recognition function that converts images into encodings and primarily uses a CNN as its engine. The face encodings are then compared with the saved encodings; this way, faces can be told apart by their encodings.
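The comparison itself can be sketched without the library: face_recognition represents each face as a 128-dimensional encoding and declares a match when the Euclidean distance between encodings falls below a tolerance (0.6 is the library's default). The function name and the synthetic encodings below are illustrative, not real face data:

```python
import numpy as np

# Sketch of face_recognition-style matching: each face is a 128-dimensional
# encoding vector, and two faces match when the Euclidean distance between
# their encodings is below a tolerance (0.6 is the library's default).
# The encodings used here are synthetic stand-ins, not real faces.

def match_face(known_encodings, candidate, tolerance=0.6):
    """Return the index of the best-matching known face, or None."""
    if not known_encodings:
        return None
    distances = [np.linalg.norm(np.asarray(k) - np.asarray(candidate))
                 for k in known_encodings]
    best = int(np.argmin(distances))
    return best if distances[best] <= tolerance else None
```

A matched index would then be mapped to the person's name and spoken to the user.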
The sensors are placed at different locations on the cane: the water and heat sensors are at and near the tip, while the light detector is at the top for better function. The ultrasonic sensors are mounted such that one sensor is slightly lower than the other. Both sensors send their data to the Arduino, where the two readings are compared to characterize the obstacle.
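One plausible version of that comparison is sketched below, assuming the lower sensor catches low obstacles (curbs, steps, boulders) that the higher one misses, while tall obstacles trigger both. The detection range and the output phrasing are assumptions, not the project's exact logic:

```python
# Hypothetical sketch of the two-ultrasonic-sensor comparison. Assumes the
# lower-mounted sensor detects low obstacles the higher one misses, while
# tall obstacles trigger both. Distances in cm; the 150 cm effective
# detection range is an assumed value.

def classify_obstacle(lower_cm, upper_cm, max_range_cm=150):
    lower_hit = lower_cm < max_range_cm
    upper_hit = upper_cm < max_range_cm
    if lower_hit and upper_hit:
        return "tall obstacle ahead"
    if lower_hit:
        return "low obstacle ahead, possibly a curb or step"
    if upper_hit:
        return "overhanging obstacle ahead"
    return "path clear"
```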
All this information will be voiced using gTTS.
This is a step in aiding the blind using GPU-powered technology, a step already taken by Nvidia; no other leading company has set out to solve this problem. Nvidia, inspired by the sci-fi television series "Star Trek", has made goggles that employ deep learning to make the user aware of his/her surroundings. The goggles are stereoscopic, which means they alert the user to the distance from an obstacle. Since the goggles are GPU-powered, complex algorithms can run on them: the user can identify a person by name and gender, which implies a complex facial recognition algorithm is running. This is supported by the fact that Nvidia has made a custom GPU and CPU just for this purpose. The only negative factor is the price of the device: while it is the most capable device yet, not everyone can afford it.
Our device will be able to use GPU technology (the Jetson Nano 2GB board comes with a GPU) to aid the blind at a lower price. Later, this can be further optimized by using FPGAs.
The project takes navigation beyond trivial sensors and GPS modules, which are good at providing directions but not at translating the environment to the user.
The deliverable will be a smart walking cane powered by the Jetson Nano 2GB board. The board features an Nvidia Tegra GPU and comes with 2 GB of RAM; additional swap memory was created on the board's memory card for when it is needed. The GPU uses CUDA technology, which allows the user to utilize 128 GPU cores on top of the 4 CPU cores. Our OpenCV library was built accordingly so that it best utilizes these resources. The board requires a 5 V, 3 A source.
The Arduino Uno controls the sensors using its digital I/O pins. It requires a 5 V input.
These controllers will be placed inside a control box at the center of the cane. The cameras will peek out through the cane, and all the wiring to the sensors will run inside the hollow stick.
The whole stick will draw its main power from a rechargeable power bank, which will be sufficient for our requirements.
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Jetson Nano 2GB | Equipment | 1 | 18899 | 18899 |
| Arduino Uno | Equipment | 1 | 1990 | 1990 |
| Ultrasonic (x2), heat, water and light sensors | Equipment | 5 | 1500 | 7500 |
| Power Bank (3A, 5V) | Equipment | 1 | 4000 | 4000 |
| Stick | Equipment | 1 | 2000 | 2000 |
| Miscellaneous | Miscellaneous | 1 | 1500 | 1500 |
| Total (in Rs) | | | | 35889 |