Self-Driving Car using LIDAR Sensing and Image Processing Technology
Project Area of Specialization: Artificial Intelligence

Project Summary

The term self-driving car means “a car which is able to drive itself without any human operator”. A self-driving car drives itself along a planned path with the help of sensors. It has been developed over the past decades to support human beings and to reduce road accidents caused by human error. These cars are widely predicted to reduce road congestion through a higher degree of efficiency, improve road safety by eliminating human error, and free drivers from the burden of driving, allowing greater productivity and time for rest, along with numerous other envisioned benefits. Uber, Tesla, Google, and Toyota are among the companies that have been developing and testing autonomous vehicles for about 20 years. We propose a prototype self-driving car implemented on a small electric baby car. We extract data from four types of sensors: LIDAR, a camera, ultrasonic sensors, and GPS. The LIDAR provides 3-D mapping of the surrounding region, the camera provides vision, the ultrasonic sensors handle obstacle detection and avoidance, and GPS handles navigation and positioning. The main stages of a self-driving car are perception, planning, and control. Possibly for a long time to come, the full driving task will remain too complex to be fully formalized as a sensing-acting robotic system that can be explicitly solved through model-based and learning-based approaches in order to achieve full, unconstrained vehicle autonomy.
Project Objectives
Lane Detection
In the lane detection system, the car detects and identifies the lanes on a road. It uses the Canny edge detector and the Hough transform to detect lane lines in real-time images fed from the automobile's front camera.
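As a rough illustration, the following minimal sketch (assuming OpenCV and a front camera at index 0; the region-of-interest geometry and the thresholds are placeholder values) applies Canny edge detection followed by the probabilistic Hough transform to extract lane-line segments from a frame:

```python
# Minimal lane-detection sketch: Canny edges + probabilistic Hough transform.
# Camera index, ROI polygon, and thresholds are illustrative assumptions.
import cv2
import numpy as np

def detect_lane_lines(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region in front of the car (assumed geometry).
    h, w = edges.shape
    roi = np.zeros_like(edges)
    polygon = np.array([[(0, h), (w, h), (int(0.55 * w), int(0.6 * h)),
                         (int(0.45 * w), int(0.6 * h))]], dtype=np.int32)
    cv2.fillPoly(roi, polygon, 255)
    masked = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=1, theta=np.pi / 180, threshold=40,
                            minLineLength=40, maxLineGap=100)
    return [] if lines is None else [l[0] for l in lines]

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # front camera index is an assumption
    ok, frame = cap.read()
    if ok:
        for x1, y1, x2, y2 in detect_lane_lines(frame):
            cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 3)
    cap.release()
```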
Traffic-sign detection and classification
Traffic signs help keep traffic in order and play an organizing role in reducing the number of traffic accidents on the road network. Recognizing traffic signs and acting on them is therefore a critical task for a self-driving vehicle.
Traffic-sign detection is what enables the car to recognize the presence of traffic signs, and classification is what enables it to read the instruction on the sign, e.g. "speed limit" or "turn ahead", and act accordingly through an interactive planning and control mechanism.
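A minimal sketch of the classification half is given below. The small convolutional network, the 32x32 input size, and the 43-class (GTSRB-style) label set are assumptions, since the proposal does not fix a dataset or architecture:

```python
# Minimal traffic-sign classifier sketch: a small convolutional network.
# Input size, class count, and training data loading are assumptions.
import tensorflow as tf

NUM_CLASSES = 43  # assumed, e.g. the GTSRB benchmark

def build_sign_classifier():
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage (assuming x_train/y_train are cropped sign images and integer labels):
# model = build_sign_classifier()
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```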
Obstacle detection and avoidance
A self-driving car needs to be able to detect and avoid obstacles that pose an impending collision: other vehicles, cyclists, and pedestrians.
Most modern cars already integrate collision avoidance or warning systems, a safety feature designed to prevent or reduce the severity of a collision by warning the driver and, when a collision is imminent, taking action without any driver intervention. However, these systems act without considering a goal when deciding how to execute an avoidance maneuver, whereas this work involves dynamic path-planning decisions for the obstacle avoidance maneuver.
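A minimal sketch of the short-range detection part using the HC-SR04 ultrasonic sensor on a Raspberry Pi is shown below; the GPIO pin numbers and the 30 cm stop threshold are assumed values, not the prototype's actual wiring:

```python
# Minimal short-range obstacle detection sketch with an HC-SR04 on a Raspberry Pi.
# Pin numbers and the safety threshold are assumptions for illustration.
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # assumed BCM pin numbers
STOP_DISTANCE_CM = 30.0      # assumed safety threshold

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def read_distance_cm():
    # A 10 us trigger pulse starts a measurement cycle.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # The echo pin stays high for the duration of the ultrasonic round trip.
    start = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    end = time.time()
    while GPIO.input(ECHO) == 1:
        end = time.time()

    # Speed of sound ~34300 cm/s; divide by 2 for the one-way distance.
    return (end - start) * 34300.0 / 2.0

def obstacle_ahead():
    return read_distance_cm() < STOP_DISTANCE_CM
```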
Object Detection and Tracking
Humans can readily detect and identify objects present in their field of vision. The human visual system is quick and efficient enough to perform complex tasks like identifying multiple objects with little conscious thought.
Real-time video object detection is a key component in self-driving systems and detecting objects in a video stream frame-by-frame is critical when milliseconds are at stake to avoid a collision.
Object detection means that multiple objects in a video frame are identified, classified, and localized. For simplicity, this project involves detecting only vehicles and pedestrians.
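A minimal frame-by-frame detection sketch is shown below, assuming a pre-trained MobileNet-SSD Caffe model loaded through OpenCV's DNN module; the model file paths and the confidence threshold are assumptions, and any detector with comparable classes could be substituted:

```python
# Minimal vehicle/pedestrian detection sketch using a pre-trained MobileNet-SSD
# loaded through OpenCV's DNN module. Model file paths are assumed to exist.
import cv2

# Standard 21-class VOC label list used by the common MobileNet-SSD Caffe model.
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
           "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
           "tvmonitor"]
WANTED = {"car", "bus", "motorbike", "bicycle", "person"}

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",   # assumed paths
                               "MobileNetSSD_deploy.caffemodel")

def detect_objects(frame, conf_threshold=0.5):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    results = []
    for i in range(detections.shape[2]):
        conf = float(detections[0, 0, i, 2])
        label = CLASSES[int(detections[0, 0, i, 1])]
        if conf >= conf_threshold and label in WANTED:
            box = detections[0, 0, i, 3:7] * [w, h, w, h]
            results.append((label, conf, box.astype(int)))
    return results
```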
Navigation
Car navigation allows an automobile driver to see where they are on a map at any given time. Automobile navigation can rely on GPS (Global Positioning System). GPS-based navigation uses radio signals from special satellites that send their position and the time of transmission.
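A minimal sketch of obtaining a position fix is shown below, assuming the GPS module (e.g. the SIM808 in the resources list) streams NMEA sentences over a serial port; the port name and baud rate depend on the actual wiring:

```python
# Minimal GPS fix sketch: parse NMEA $GPGGA sentences from a serial GPS module.
# Serial port and baud rate are assumptions that depend on the wiring.
import serial

def nmea_to_decimal(value, hemisphere):
    # NMEA latitude is ddmm.mmmm and longitude is dddmm.mmmm.
    degrees = int(float(value) / 100)
    minutes = float(value) - degrees * 100
    decimal = degrees + minutes / 60.0
    return -decimal if hemisphere in ("S", "W") else decimal

def read_fix(port="/dev/ttyS0", baud=9600):
    with serial.Serial(port, baud, timeout=2) as gps:
        while True:
            line = gps.readline().decode("ascii", errors="ignore").strip()
            if line.startswith("$GPGGA"):
                fields = line.split(",")
                if fields[2] and fields[4]:          # latitude/longitude present
                    lat = nmea_to_decimal(fields[2], fields[3])
                    lon = nmea_to_decimal(fields[4], fields[5])
                    return lat, lon

# Usage: lat, lon = read_fix()   # decimal degrees for the navigation layer
```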
Our system hardware consists of four types of sensors. The first is the LIDAR, an optical remote-sensing technology that measures the distance to a target by illuminating it with pulsed laser light. We use a sensor fusion technique to make the most of each sensor: in autonomous-vehicle environment perception, the LIDAR produces accurate 3D measurements but offers little appearance information, so we also use a camera to capture the correct and rich appearance of objects. Along with these two, we use ultrasonic sensors as secondary sensors for obstacle detection and avoidance. A GPS module is used to obtain the trajectory path from GPS satellites and to make the car aware of exactly where it is.
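As a rough illustration of how the LIDAR scan can feed obstacle detection, the following minimal sketch converts (angle, range) pairs into Cartesian points around the car and checks a safety radius; the scan format and the 0.5 m radius are assumptions, and the real YDLIDAR X4 driver would supply the measurements:

```python
# Minimal sketch: convert a 2D LIDAR scan (angle/range pairs) into Cartesian
# points and flag anything inside an assumed safety radius around the car.
import math

SAFETY_RADIUS_M = 0.5  # assumed clearance around the car

def scan_to_points(scan):
    """scan: iterable of (angle_deg, range_m) measured from the LIDAR centre."""
    points = []
    for angle_deg, range_m in scan:
        if range_m <= 0:          # invalid / dropped return
            continue
        theta = math.radians(angle_deg)
        points.append((range_m * math.cos(theta), range_m * math.sin(theta)))
    return points

def nearest_obstacle(scan):
    distances = [math.hypot(x, y) for x, y in scan_to_points(scan)]
    return min(distances) if distances else float("inf")

# Usage with a fake scan (a reading 0.4 m directly ahead trips the check):
# scan = [(0.0, 0.4), (90.0, 2.1), (180.0, 3.0)]
# assert nearest_obstacle(scan) < SAFETY_RADIUS_M
```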
The Software Implementation of our system is divided into three parts:
1. Perception
Perception refers to the ability of an autonomous system to collect information and extract relevant knowledge from the environment or sensor data.
The critical elements of perception in our system include:
1. Lane Detection
2. Traffic-Light Detection
3. Traffic Sign Detection
4. Object Detection & Classification
2. Planning
Once the elements of perception are done, the next step is planning. Planning includes:
1. Behavioral Planning: Behavioral planning is responsible for decision making, ensuring that the vehicle follows road rules and interacts with other objects in a safe and conventional manner while making progress along the prescribed route.
2. Motion Planning: In the context of mobile robotics, motion planning refers to the process of deciding on a sequence of actions to reach a specified goal, typically while avoiding collisions with obstacles (a minimal grid-based sketch follows this list).
3. Trajectory Planning: The trajectory planner is responsible for choosing the best route, among all available routes, to reach the destination.
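A minimal grid-based motion-planning sketch is given below (A* search on an occupancy grid); the grid, start, and goal are illustrative, and the real planner would be fed by the perception layer's obstacle map:

```python
# Minimal motion-planning sketch: A* search on an occupancy grid.
# Grid, start, and goal below are illustrative placeholders.
import heapq

def a_star(grid, start, goal):
    """grid: 2D list where 1 marks an obstacle cell; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    heuristic = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    frontier = [(heuristic(start, goal), 0, start, [start])]
    visited = set()

    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                heapq.heappush(frontier, (cost + 1 + heuristic((r, c), goal),
                                          cost + 1, (r, c), path + [(r, c)]))
    return None  # no collision-free route found

# Usage: path = a_star([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```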
3. Control
Control refers to the execution competency of an autonomous system, also referred to as motion control: the process of converting intentions into actions. In our system, control is divided into the following types (a minimal PID sketch follows the list):
a. Acceleration control
b. Steering control
c. Brakes control
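A minimal PID controller sketch for these low-level loops (e.g. holding a target speed or steering toward the lane centre) is given below; the gains are placeholders, not tuned values for the prototype car:

```python
# Minimal PID controller sketch for low-level control loops.
# Gains in the usage example are placeholder values, not tuned for the car.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: one speed-control step at 20 Hz with assumed gains.
# speed_pid = PID(kp=0.8, ki=0.1, kd=0.05)
# throttle = speed_pid.update(setpoint=1.5, measurement=1.2, dt=0.05)
```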
• The overwhelming benefit is safety. Over 1.3 million people are killed annually on the roads around the world with several million seriously injured. According to the US National Highway Traffic Safety Administration, alcohol abuse, speeding, and driver distraction are the cause of the vast majority of these accidents. But autonomous vehicles will never be susceptible to any of these failings. It has been estimated that driverless cars will save over a million lives each year.
• A passenger can spend time doing other things in a driverless car. There will also be fewer of them on the roads. Therefore, less time will be spent in traffic jams. They will all be connected online, meaning that they can communicate with each other and coordinate intentions.
• Driverless cars will use electric power because they will be harder to operate with fossil fuels. This would also mean a move away from fossil fuel transport to a less polluting form of energy with fewer carbon emissions – reducing the effect on global warming.
• Can carry much larger payloads, as the mechanical actuators and controls required for driving will be replaced by electronic actuators.
• Provides a source of independent transportation to children, the elderly, and people with disabilities.
• Leaves room for innovation. Cars can be completely redesigned around human safety and ease of travelling, moving away from traditional vehicle designs.
The system we are designing will be small scale, prototyped on a small RC baby car with minimal hardware resources, and will only be able to perform limited driving tasks such as obstacle avoidance, lane detection, traffic-sign and traffic-light detection, and navigation. The system's primary sensors will be the LIDAR and the camera. The LIDAR will be responsible for 3D mapping of the surrounding region with high precision, and will be used for obstacle avoidance and for tracking objects around the car. The camera will be responsible for the system's vision and will perform tasks such as lane detection, traffic-light and traffic-sign detection, obstacle avoidance, and object tracking. The first two tasks, lane detection and traffic-sign and light detection, will be based entirely on the camera, whereas the other two tasks, obstacle avoidance and object tracking, will couple camera and LIDAR data with a technique called sensor fusion. Along with these sensors, ultrasonic sensors will also be installed for better obstacle avoidance and added safety against incoming short-distance obstacles. For navigation and route planning, the GPS sensor will be used; loaded with the Google Maps API, it will be responsible for localization and navigation of the car. As far as the software is concerned, our system will be based on Python with a small amount of C++. This software will be responsible for data acquisition, data management, data extraction, applying algorithms to the extracted data, and performing the necessary control operations. Throughout all this, a small TFT LCD will take input from the user and display the current state and the next move the car is going to perform.
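A minimal sketch of the camera/LIDAR fusion step is given below: a 3D LIDAR point is projected into camera pixel coordinates so that its range can be attached to a detected object. The intrinsic matrix K and the LIDAR-to-camera rotation/translation are made-up calibration values for illustration:

```python
# Minimal fusion sketch: project a 3D LIDAR point into camera pixel coordinates.
# K, R, and t are assumed calibration values, not measured for the prototype.
import numpy as np

K = np.array([[500.0, 0.0, 320.0],       # assumed camera intrinsics
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                            # assumed LIDAR-to-camera rotation
t = np.array([0.0, -0.1, 0.05])          # assumed LIDAR-to-camera translation (m)

def project_lidar_point(p_lidar):
    """Return (u, v) pixel coordinates, or None if the point is behind the camera."""
    p_cam = R @ np.asarray(p_lidar) + t
    if p_cam[2] <= 0:
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

# Usage: a point 5 m straight ahead lands near the image centre.
# print(project_lidar_point([0.0, 0.0, 5.0]))
```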
Final Deliverable of the Project: HW/SW integrated system
Type of Industry: IT
Technologies: Artificial Intelligence (AI), Internet of Things (IoT), Robotics
Sustainable Development Goals: Decent Work and Economic Growth; Industry, Innovation and Infrastructure
Required Resources:

| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| RC Baby Car | Equipment | 1 | 27500 | 27500 |
| 5 MP Pi Cam | Equipment | 1 | 2480 | 2480 |
| 3.5" TFT LCD | Equipment | 1 | 2800 | 2800 |
| EAI YDLIDAR X4 Lidar | Equipment | 1 | 14000 | 14000 |
| Arduino UNO | Equipment | 1 | 800 | 800 |
| Ultrasonic Sensor HC-SR04 | Equipment | 3 | 150 | 450 |
| Raspberry Pi 3B+ | Equipment | 1 | 5450 | 5450 |
| DC-DC Buck Converter | Equipment | 1 | 320 | 320 |
| SIM808 Module (GSM/GPRS/GPS) | Equipment | 1 | 2800 | 2800 |
| Custom Duty | Miscellaneous | 1 | 6960 | 6960 |
| Wires & Connectors | Miscellaneous | 1 | 950 | 950 |
| Total (in Rs) | | | | 64510 |