Autonomous Obstacle Avoidance with Monocular Perception
Autonomous navigation of automobiles is a long-standing problem. The navigation task typically requires heavy and expensive sensors such as LiDAR and Kinect to perceive the environment, and these sensors drain a lot of power. Single cameras, however, are cheap and found on most quadcopters today. Repurposing these cameras for depth estimation of a scene is therefore a useful capability. Our method uses deep learning to train a network to perceive distances to obstacles from a single image, exploiting Structure from Motion techniques. The estimated depth map of the scene is then used to determine steering commands that let a quadcopter avoid collisions and fly safely.
The objective of my project is to train an obstacle-avoiding deep network that is fast and accurate: fast enough for real-time response on a quadcopter, and accurate enough to be practical.
The project will be implemented as a deep convolutional neural network for depth estimation, trained with supervised learning in a simulation environment. The simulation environment of choice is AirSim by Microsoft. The resulting depth map is passed into a second network that estimates the navigation command for the quadcopter; this network will be trained in AirSim with reinforcement learning.
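The two-stage pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's code: `estimate_depth` and `steer_from_depth` are hypothetical stand-ins for the trained depth CNN and the RL-trained navigation network, and the left/centre/right decision rule is an assumed heuristic.

```python
import numpy as np

def estimate_depth(rgb_image: np.ndarray) -> np.ndarray:
    """Stand-in for the depth-estimation CNN: maps an HxWx3 RGB frame to an
    HxW depth map. A real implementation would run a trained convolutional
    network; here a constant depth of 10 m stands in."""
    h, w, _ = rgb_image.shape
    return np.full((h, w), 10.0)  # hypothetical depth in metres

def steer_from_depth(depth_map: np.ndarray, safe_distance: float = 5.0) -> str:
    """Stand-in for the navigation network: split the depth map into
    left/centre/right thirds and steer toward the most open region, the
    kind of decision rule a trained RL policy would approximate."""
    left, centre, right = np.array_split(depth_map, 3, axis=1)
    if centre.mean() >= safe_distance:
        return "forward"
    return "left" if left.mean() > right.mean() else "right"

# One step of the pipeline: camera frame -> depth map -> steering command.
frame = np.zeros((120, 160, 3))
command = steer_from_depth(estimate_depth(frame))
```

In the actual system, both stages would run as one forward pass per camera frame, so the end-to-end latency budget is set by the slower of the two networks.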
Monocular-camera-based navigation is a low-cost, power-efficient, and lightweight alternative to the conventional LiDAR sensors and Kinect cameras used for obstacle avoidance. This improvement will make quadcopters cheaper and more accessible, and promote autonomy and safety in navigation.
The network will be trained on a GPU in a simulated environment, but it generalizes well and the learning can be transferred to a real environment. Deploying it on a quadcopter with real-time response would require an embedded GPU on the quadcopter; alternatively, remote processing can be employed. The quadcopter transmits the state of the environment, captured through the camera, to a remote server over the network. The image is processed on a GPU-enabled system and a steering command is returned in real time.
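The remote-processing loop needs a way to move camera frames and steering commands over a connection. A minimal sketch of one possible wire format, a simple length-prefixed framing (an assumption for illustration, not something the project specifies):

```python
import socket
import struct

def send_msg(sock: socket.socket, payload: bytes) -> None:
    """Frame a message as a 4-byte big-endian length followed by the payload
    (e.g. a JPEG-encoded camera frame, or a short command string)."""
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_msg(sock: socket.socket) -> bytes:
    """Read exactly one length-prefixed message from the socket."""
    (size,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, size)

def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """recv() may return fewer bytes than requested; loop until n arrive."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-message")
        buf += chunk
    return buf
```

The quadcopter side would `send_msg` each encoded frame and `recv_msg` the command; the GPU server does the reverse. In practice the frame should be compressed before sending, since round-trip latency directly limits control rate.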
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| NVIDIA GTX 1070 | Equipment | 1 | 60000 | 60000 |
| SSD 120 GB | Equipment | 1 | 10000 | 10000 |
| IEEE & CIS membership | Miscellaneous | 1 | 5000 | 5000 |
| Total (in Rs) | | | | 75000 |