Adil Khan 10 months ago
AdiKhanOfficial #FYP Ideas

Project Title

OBJECT DETECTION, CLASSIFICATION AND TRACKING FOR AUTONOMOUS VEHICLE

Project Area of Specialization

Artificial Intelligence

Project Summary

An autonomous vehicle (AV) is a self-driving car. It offers advantages such as fewer traffic jams, stress-free parking, travel-time savings, and so on. There are six levels of automation (level 0 to level 5). Autonomous vehicles are basically based on three things: localization, mapping, and object tracking. In this report, the information from the Pi Camera images, laser sensors, LiDAR, GPS/INS, BMS, LiPo battery, buck converter (5 A), LCD screen (HDMI), and Arduino Nano is discussed in detail. LiDAR senses the environment and measures distance by illuminating the subject with laser light and timing how long the reflected light takes to return to the sensor. GPS provides real-time geolocation. The Pi Camera v2 is an excellent 8-megapixel camera built around the Sony IMX219 image sensor, a custom add-on board for the Raspberry Pi with a fixed-focus lens. The LiPo battery's electrolyte is formed from high-conductivity semi-solid polymers.
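The time-of-flight principle behind LiDAR ranging reduces to distance = (speed of light × round-trip time) / 2. A minimal sketch of that arithmetic (the timing value used is purely illustrative):

```python
# Time-of-flight ranging as used by LiDAR: the sensor emits a laser
# pulse and times the round trip, so distance = (c * t) / 2.
C = 299_792_458.0  # speed of light in vacuum, m/s


def tof_distance_m(round_trip_s: float) -> float:
    """Distance to the target in metres, given a round-trip time in seconds."""
    return C * round_trip_s / 2.0


# A round trip of roughly 333.6 ns corresponds to about 50 m, the upper
# end of the 40-50 m range quoted later in this proposal.
print(round(tof_distance_m(333.6e-9), 1))
```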
The methodology of how the project/autonomous vehicle works is discussed as a step-by-step description of the AV system. Economic and technical feasibility were also assessed: economic feasibility covers the cost of the components used in the project, while technical feasibility asks whether those components are compatible and will work for this project. A flowchart and block diagrams are also provided to clarify the purpose and workings of the project.

Project Objectives

The operation of the AV is organized around three major objectives.

  • Object detection (using cameras): Cameras installed around the vehicle capture images, which are processed using the Viola-Jones algorithm with Haar cascade classifiers (distributed as XML files; the frontal-face cascade is the canonical example). A database of known objects is built so that the images captured while driving can be compared against it. The vehicle can then detect what object is in its path and, based on that detection, decide on further action.
  • Distance measurement (using LiDAR): The LiDAR sensor scans the AV's surroundings so that, together with GPS, it can build a map of the environment with accurate distances between the vehicle and nearby objects. The LiDAR's range is around 40-50 meters, so anything within that range can be detected, and the data can be used to generate real-time maps for the AV to navigate by.
  • Position localization/mapping (using GPS): The AV's GPS is responsible for determining the vehicle's real-time position with centimeter-level accuracy, as well as its heading, which helps it navigate. In addition, the raw LiDAR data is integrated with the GPS data, and a microcontroller generates a real-time map showing the exact positions of objects with actual distances. We will initially work in a global coordinate system, but for greater accuracy we may use local coordinates as well.

The Raspberry Pi is the brain of the AV. All the raw data collected from the cameras, LiDAR, and GPS is synchronized and integrated to build a complete picture of the environment, which lets the vehicle navigate smoothly and make its own decisions based on road and traffic conditions, signs, and signals: whether to stop, maintain speed, make a turn, and so on.

Combining all this sensor data in a program running on the Raspberry Pi produces the best results for the car to navigate roads smoothly while avoiding collisions and accidents.
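As a rough illustration of that combining program, a minimal decision step might look like the following; the detection labels, stopping threshold, and input values are all assumptions made for the sketch:

```python
# Minimal sketch of the Raspberry Pi "brain": fuse camera detections and
# the nearest LiDAR range into a single driving action. The label names
# and the 2 m threshold are illustrative assumptions.
STOP_DISTANCE_M = 2.0


def decide(detections, nearest_obstacle_m):
    """Pick a driving action from fused camera + LiDAR data."""
    if "red_light" in detections:
        return "stop"
    if nearest_obstacle_m < STOP_DISTANCE_M:
        return "stop"
    return "drive"


print(decide(["red_light"], 10.0))  # red signal detected -> stop
print(decide([], 1.2))              # obstacle too close -> stop
print(decide([], 10.0))             # clear road -> drive
```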

A panaflex banner of a circular road, 11 feet long and 5.5 feet wide, was printed. Traffic signals built from LEDs mounted on an Arduino Nano board were placed along the road. The neural network was coded on the basis of the Viola-Jones algorithm with Haar cascade detection (originally frontal-face detection), but we adapted it to detect traffic signals, path changes, and obstacles. The network was trained by driving the car around the track thousands of times so that it can drive itself in the final version.

Project Implementation Method

To start, all the components are mounted onto the RC car chassis to ensure everything fits, there are no space issues, and nothing causes a problem once the car starts driving itself. After the components are installed, the coding for each component begins: the Raspberry Pi (essentially the brain of the whole car), the GPS system, LiDAR, camera, and drive motors.

The coding is divided into three parts: Data Collection, Training, and Implementation.

Data Collection is the stage where data is gathered from all the components and used to train the AV's model.

The Training stage trains the whole AV and its neural network to run fully automated, without the help of a driver.

The Implementation stage uses all the training and the collected data to run the AV fully autonomously.
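A minimal sketch of the data-collection stage, assuming camera frames are saved as image files elsewhere and steering commands come from the joystick (file names and the CSV format are assumptions for the sketch):

```python
# Sketch of the data-collection stage: pair each saved camera frame with
# the joystick steering command at that moment and log the pairs to CSV
# as a training set. Frame capture and joystick I/O are stand-ins here.
import csv


def collect(samples, out_path="drive_log.csv"):
    """Write (frame_file, steering) pairs to a CSV training log."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "steering"])
        writer.writerows(samples)


# Hypothetical samples: frames saved elsewhere, steering in [-1, 1].
collect([("frame_0001.jpg", 0.0), ("frame_0002.jpg", -0.3)])
```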

After the AV is assembled and all the necessary programs are loaded onto the Raspberry Pi, training of the neural network begins. A joystick is used to drive the car around the track thousands of times, at the same speed and along the same path. This lets the network build the database the car will use when driving fully automated. The training is crucial and time-consuming because the same path must be followed at the same speed; any mistake means scrapping the whole run and starting a new one.
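The database idea described above can be sketched as a crude nearest-neighbour lookup over recorded runs; the two-component observation vectors here are an illustrative stand-in for real camera/LiDAR features:

```python
# Sketch of the record-then-replay idea: recorded runs form a database of
# (observation, command) pairs, and at drive time the closest stored
# observation supplies the command. The feature vectors are illustrative.
import numpy as np


class DriveDatabase:
    def __init__(self):
        self.obs, self.cmds = [], []

    def record(self, observation, command):
        """Store one (observation, command) pair from a training run."""
        self.obs.append(np.asarray(observation, dtype=float))
        self.cmds.append(command)

    def command_for(self, observation):
        """Return the command of the nearest recorded observation."""
        stacked = np.stack(self.obs)
        dists = np.linalg.norm(stacked - np.asarray(observation, float), axis=1)
        return self.cmds[int(np.argmin(dists))]


db = DriveDatabase()
db.record([0.0, 5.0], "straight")   # e.g. [lane offset, obstacle distance]
db.record([0.8, 5.0], "turn_left")
print(db.command_for([0.1, 5.0]))   # nearest to the "straight" sample
```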

Benefits of the Project

  • Greatly improved safety: 94% of accidents are caused by human error
  • Improved transport interconnectivity
  • Reduced congestion: congestion was estimated to cost NSW $6.9 billion in 2017
  • Reduced pollution and emissions: transport energy consumption could fall by up to 90%
  • Greater mobility options for elderly, young, and disabled users
  • Greater convenience, efficiency, and reliability
  • Reduced costs and maintenance requirements

Technical Details of Final Deliverable

The technical details of the project are as follows:

1) The circuit is built from integrated circuits (ICs); the main controlling brain is the Arduino Mega.

2) Three main boards are integrated together: the control board, the battery-management board, and the relay/switching board.

3) Inputs are taken through sensors, both from the environment and from the operating parameters of the circuits themselves.

4) The system provides framing for the panels, which can move according to the intensity of the sun.

Final Deliverable of the Project

HW/SW integrated system

Core Industry

Transportation

Other Industries

IT

Core Technology

Artificial Intelligence(AI)

Other Technologies

Robotics

Sustainable Development Goals

Quality Education

Required Resources

Item Name          Type            No. of Units   Per Unit Cost (Rs)   Total (Rs)
Current sensor     Equipment       10             450                  4500
Buck Converter     Equipment       9              250                  2250
Voltage Regulator  Equipment       7              150                  1050
LCDs               Equipment       6              750                  4500
ZMPT               Equipment       5              800                  4000
Arduino Mega       Equipment       2              2200                 4400
LDRs               Equipment       11             180                  1980
Motor Driver       Equipment       5              750                  3750
Wi-Fi Module       Equipment       5              200                  1000
TRF mini LiDAR     Equipment       3              2600                 7800
IR sensors         Equipment       5              850                  4250
Frame              Equipment       1              4800                 4800
Motor              Equipment       4              2800                 11200
Arduino            Equipment       1              2200                 2200
Other              Miscellaneous   1              10000                10000
Total (Rs)                                                             67680
If you need this project, please contact me on contact@adikhanofficial.com