Autonomous vehicle
Autonomous vehicles (AVs) will be ubiquitous in our future city streets. Proponents of AVs advocate for their accelerated adoption, noting potential benefits such as reducing pollution and improving road safety. Advanced AV technologies promise to be more reliable than the average human driver and may eliminate up to 90% of car crashes. To this end, the U.S. Department of Transportation and the National Highway Traffic Safety Administration (NHTSA) have published federal guidance for autonomous vehicles. The guidance encourages states to begin testing them on local streets, and states such as Massachusetts have already begun doing so in pilot programs. However, the guidance provided is general and underspecified. It leaves open many questions that will need more specific answers before AVs can drive beyond the current phase of limited tests and into public use. Many of these questions are regulatory. Others are ethical. All are of concern to government officials, automobile manufacturers, and any member of the public who uses the streets. This project proposes an approach to resolving open regulatory and ethical questions while also leveraging the potential of AV technology to reframe the priorities and morality of our streets.
The introduction of AV technology presents the opportunity to upgrade how the system of streets serves the public. If AV driving algorithms are designed to prioritize higher occupancy vehicles, bikes, and pedestrians, we can expect the increased use of shared transit and a gradual shift to a system of streets that are safer and more equitable.
Autonomous vehicles are here, and they’re here to stay. While their use and acceptance are not yet widespread, that day is coming. Most of the major automotive manufacturers are actively exploring autonomous-vehicle programs and conducting extensive on-road testing.
Increased safety is the primary benefit: right off the bat, the main goal is to reduce the number of accidents. Cars that have collision-avoidance technology today are already demonstrating that they are safer than cars that don't.
Phase I:
We will set up all the computer vision and deep learning software needed. The main software tools we are using are Python (the de facto programming language for machine learning/AI tasks), OpenCV (a powerful computer vision library), and TensorFlow (Google's popular deep learning framework).
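The software setup can be done with pip; a minimal sketch, assuming a Linux/Raspberry Pi OS environment where Python 3 and pip are already installed (exact package versions are left to the reader):

```shell
# Install the three core packages from PyPI
# (opencv-python is the standard PyPI name for OpenCV's Python bindings).
pip install numpy opencv-python tensorflow

# Quick sanity check that both libraries import and report their versions.
python -c "import cv2, tensorflow as tf; print(cv2.__version__, tf.__version__)"
```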
Phase II:
With the hardware and software setup out of the way, we will dive into the next parts. Our first task is to use Python and OpenCV to teach our car to navigate autonomously on a winding single-lane road by detecting lane lines and steering accordingly.
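To illustrate the "steer accordingly" step, here is a simplified sketch of the steering computation. It assumes the lane pixels have already been extracted (in the real pipeline OpenCV edge and line detection would produce them); the function name and the default frame width are illustrative choices, not part of the project:

```python
import numpy as np

def steering_angle_from_lane(lane_points, frame_width=320):
    """Estimate a steering angle (degrees) from detected lane-line pixels.

    lane_points: iterable of (x, y) pixel coordinates belonging to the lane
    line. Positive angle = steer right, negative = steer left.
    """
    pts = np.asarray(lane_points, dtype=float)
    # Fit x as a function of y, since lane lines are roughly vertical
    # in the camera image.
    slope, intercept = np.polyfit(pts[:, 1], pts[:, 0], 1)
    # Where the fitted lane line meets the bottom of the frame.
    y_bottom = pts[:, 1].max()
    x_bottom = slope * y_bottom + intercept
    # Horizontal offset from the frame centre, mapped to a steering angle.
    offset = x_bottom - frame_width / 2
    return float(np.degrees(np.arctan2(offset, y_bottom)))
```

A lane line running straight down the centre of the frame yields an angle of zero; a lane offset to the right yields a positive (steer-right) angle.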
Phase III:
We will train our car to navigate the lane autonomously without explicitly writing the control logic. This is achieved using "behavior cloning", where we use only videos of the road, together with the correct steering angle for each video frame, to train our car to drive itself.
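The essence of behavior cloning is supervised regression from camera frames to the recorded steering angles. A minimal sketch of that setup follows, using a linear least-squares model in place of the CNN that the full project would train in TensorFlow (the function names and toy frame size are illustrative only):

```python
import numpy as np

def train_clone(frames, angles):
    """Fit a linear behavior-cloning model: flattened frame -> steering angle.

    frames: (N, H, W) array of grayscale frames; angles: (N,) steering
    angles recorded while a human drove. A CNN would replace this linear
    model in practice, but the learning setup is identical: regress from
    images to the driver's steering commands.
    """
    X = frames.reshape(len(frames), -1).astype(float)
    X = np.hstack([X, np.ones((len(X), 1))])  # bias column
    w, *_ = np.linalg.lstsq(X, np.asarray(angles, dtype=float), rcond=None)
    return w

def predict_angle(w, frame):
    """Predict a steering angle for a single frame."""
    x = np.append(frame.ravel().astype(float), 1.0)
    return float(x @ w)
```

Once trained, `predict_angle` is called on each new camera frame to steer the car, exactly as the CNN would be in the full pipeline.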
Phase IV:
We will use deep learning techniques such as single-shot multibox detection (SSD) and transfer learning to teach our car to detect various (miniature) traffic signs and pedestrians on the road.
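A core building block of SSD-style detectors is matching default (anchor) boxes to ground-truth boxes by intersection-over-union (IoU). A self-contained sketch of that matching step, with illustrative function names (the full detector would come from a TensorFlow model zoo via transfer learning):

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_anchors(anchors, truth, threshold=0.5):
    """Label each default box positive if its IoU with any ground-truth
    box reaches the threshold -- the matching rule SSD trains against."""
    return [any(iou(a, t) >= threshold for t in truth) for a in anchors]
```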
- Increased safety: automated reactions mean fewer accidents.
- Greater efficiency: smoother traffic flow and reduced congestion.
- Lower energy consumption: greater efficiency translates into energy savings across a fleet.
- More productivity: riders are free to multi-task instead of driving.
We have empirically demonstrated that CNNs are able to learn the entire task of lane and road following without manual decomposition into road or lane marking detection, semantic abstraction, path planning, and control. A small amount of training data from less than a hundred hours of driving was sufficient to train the car to operate in diverse conditions, on highways, local and residential roads in sunny, cloudy, and rainy conditions. The CNN is able to learn meaningful road features from a very sparse training signal (steering alone). The system learns for example to detect the outline of a road without the need of explicit labels during training. More work is needed to improve the robustness of the network, to find methods to verify the robustness, and to improve visualization of the network-internal processing steps.
| Item Name | Type | No. of Units | Per Unit Cost (in Rs) | Total (in Rs) |
|---|---|---|---|---|
| Toy Car | Equipment | 1 | 20000 | 20000 |
| Raspberry Pi 4 Model B | Equipment | 1 | 18000 | 18000 |
| Raspberry Pi Camera | Equipment | 1 | 6000 | 6000 |
| TFmini LiDAR | Equipment | 1 | 15000 | 15000 |
| L293D Motor Driver Module | Equipment | 1 | 800 | 800 |
| LiPo Battery | Equipment | 1 | 2500 | 2500 |
| Wires, Glue Gun, Other Tools | Miscellaneous | 1 | 5000 | 5000 |
| Assorted Sensors | Equipment | 1 | 5000 | 5000 |
| **Total (in Rs)** | | | | 72300 |