How Autonomous Vehicles Perceive and Navigate their Surroundings

Situational awareness is the key to good driving. To reach a desired destination, drivers need to know where they are and observe their surroundings in real time. These observations allow a driver to act instinctively: accelerating or braking, changing lanes, merging onto the highway, and maneuvering around obstacles.

Fully autonomous vehicles (AVs) work in much the same way, except they use sensor and GPS technologies to perceive the environment and plan a path to the desired destination. These technologies work together to establish the car's location and the correct route to take. They continuously determine what is going on around the car, locating people and objects near the vehicle and assessing the speed and direction of their movements.

From this constant flow of information, the car's onboard computer system determines the safest way to navigate its surroundings. To better understand how sensor technologies in autonomous cars work, let's examine how these vehicles perceive their location and environment to identify and avoid objects in their paths.

Precisely Measuring the Vehicle’s Location and Surroundings

Sensor technologies provide information about the surrounding environment to the vehicle’s computer system, allowing the car to move safely in our three-dimensional world. These sensors gather data that describe a car’s changes in position and orientation.
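
To make the idea of tracking changes in position and orientation concrete, below is a minimal dead-reckoning sketch in Python. It is an illustration only, not a production localization algorithm: the function name `update_pose` and the sensor sources noted in the comments are assumptions, and a real AV would fuse these estimates with GPS and lidar-based corrections.

```python
import math

def update_pose(x, y, heading, speed, yaw_rate, dt):
    """Advance a simple 2D vehicle pose by one time step.

    x, y     -- position in meters
    heading  -- orientation in radians
    speed    -- forward speed in m/s (e.g., from wheel odometry)
    yaw_rate -- turn rate in rad/s (e.g., from an IMU gyroscope)
    dt       -- time step in seconds
    """
    heading += yaw_rate * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading

# Example: one second of driving at 10 m/s with a gentle left turn, sampled at 100 Hz
x, y, heading = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, heading = update_pose(x, y, heading, speed=10.0, yaw_rate=0.1, dt=0.01)
print(f"after 1 s: x={x:.2f} m, y={y:.2f} m, heading={math.degrees(heading):.1f} deg")
```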

Autonomous vehicles utilize high-definition maps that guide the car's navigation system. Recent developments in AV technology aim to generate and update these maps in real time. While this is still a work in progress, it is necessary because the conditions of our roadways are not static: congestion, accidents, and construction complicate real-life movement on our streets and highways. On-vehicle sensing technologies, such as lidar, cameras, and radar, perceive the environment in real time to provide accurate data about these ever-changing roadway situations.

The real-time maps that these sensors produce are often highly detailed, capturing road lanes, pavement edges, shoulders, dividers, and other critical features, along with the locations of street lights, utility poles, and traffic signs. The vehicle must be aware of each of these features to navigate the roadway safely.
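
One way to picture the contents of such a map is as a collection of typed features with 3D positions. The sketch below is a hypothetical, simplified schema in Python, not any vendor's actual map format; the name `MapFeature` and the attribute keys are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class MapFeature:
    kind: str        # e.g., "lane_boundary", "traffic_sign", "utility_pole"
    points: list     # (x, y, z) coordinates in the map frame, in meters
    attributes: dict = field(default_factory=dict)

# A tiny illustrative map tile: one lane boundary and one stop sign
tile = [
    MapFeature("lane_boundary", [(0.0, 1.75, 0.0), (50.0, 1.75, 0.0)],
               {"marking": "solid_white"}),
    MapFeature("traffic_sign", [(48.0, 5.0, 2.1)], {"type": "stop"}),
]

# The navigation system can query the features relevant to the road ahead
signs = [f for f in tile if f.kind == "traffic_sign"]
print(f"{len(signs)} sign(s) ahead, first type: {signs[0].attributes['type']}")
```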

"Due to the limitations in camera technology, we are yet to achieve complete autonomy or “self-driving” capability in the real world"

Detecting and Avoiding Objects

Sensor technologies provide onboard computers with the data they need to detect and identify objects such as vehicles, bicyclists, animals, and pedestrians. This data also allows the vehicle’s computer to measure these objects’ locations, speeds, and trajectories.

A good example of object detection and avoidance in autonomous vehicle testing is a dangerous tire fragment on the freeway. Tire fragments are usually not large enough to spot easily from a long distance, and they are often the same color as the road surface. AV sensor technology must have high enough resolution to accurately detect the fragment's location on the roadway. This requires distinguishing the tire from the asphalt and determining that it is a stationary object (rather than something like a small, moving animal).
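
To illustrate how successive sensor frames support that stationary-versus-moving judgment, here is a simplified Python sketch. The 10 Hz frame rate, the 0.5 m/s threshold, and the coordinates are illustrative assumptions, not measured values.

```python
import math

def estimate_speed(track, dt):
    """Estimate speed (m/s) from the last two (x, y) detections, dt seconds apart."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return math.hypot(x1 - x0, y1 - y0) / dt

# Detections of the same object across two sensor frames at 10 Hz (dt = 0.1 s)
tire_fragment = [(52.0, 1.2), (52.0, 1.2)]   # no displacement between frames
small_animal  = [(52.0, 1.2), (52.1, 1.5)]   # visible displacement between frames

STATIONARY_THRESHOLD = 0.5  # m/s, illustrative

for name, track in [("tire fragment", tire_fragment), ("small animal", small_animal)]:
    speed = estimate_speed(track, dt=0.1)
    label = "stationary" if speed < STATIONARY_THRESHOLD else "moving"
    print(f"{name}: {speed:.1f} m/s -> {label}")
```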

In this situation, the vehicle not only needs to detect the object but also classify it as a tire fragment that must be avoided. Then the car must determine the right course of action, such as changing lanes while avoiding the fragment and any oncoming vehicles or objects. For the car to have enough time to change its path and speed, these steps must all happen in less than a second. Again, the decisions made by the vehicle's onboard computer depend on accurate data from the vehicle's sensors.
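
That detect-classify-plan sequence can be pictured as a pipeline running under a strict latency budget. The sketch below is a toy illustration with stubbed-out stages; the stage names, the 100 ms budget, and the returned action are assumptions, not a real AV software architecture.

```python
import time

LATENCY_BUDGET_S = 0.1  # illustrative end-to-end budget per sensor frame

def detect(frame):
    return [{"position": (52.0, 1.2)}]                         # stub: find objects

def classify(objects):
    return [dict(o, label="tire_fragment") for o in objects]   # stub: label objects

def plan(objects):
    return "change_lane_left" if objects else "keep_lane"      # stub: choose an action

start = time.perf_counter()
action = plan(classify(detect(frame={})))
elapsed = time.perf_counter() - start

assert elapsed < LATENCY_BUDGET_S, "missed the real-time deadline"
print(f"action={action}, latency={elapsed * 1000:.2f} ms")
```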

A Closer Look at Sensor Technologies

To be categorized as “fully autonomous,” a car must be able to navigate between destinations without any intervention from a human driver. Self-driving cars aim to increase safety by eliminating human errors from driving situations, such as cell phone distractions or drowsy inattention.

Sensor technologies perceive a car’s environment and provide the onboard map with information about current roadway conditions. To build redundancy into self-driving systems, automakers utilize an array of sensors, including cameras, radar, and lidar.

Camera-centric sensor suites can monitor the environment and enable limited driving automation. Cameras can identify colors and fonts, so they are capable of reading traffic signals, road signs, and lane markings. However, images produced by cameras – even stereo cameras – do not always provide the level of accurate depth perception necessary for full autonomy. Due to these limitations in camera technology, complete autonomy, or true "self-driving" capability, has yet to be achieved in the real world. Radar systems complement cameras well: they typically offer better range and horizontal field of view, and they are unhampered by inclement weather or lack of light.
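
The depth limitation of stereo cameras follows from triangulation geometry: depth equals focal length times baseline divided by pixel disparity, so a fixed one-pixel disparity error produces a depth error that grows roughly with the square of the distance. The Python sketch below uses illustrative camera parameters (the 1000-pixel focal length and 0.3 m baseline are assumptions, not a specific product's specification).

```python
def stereo_depth_error(depth_m, focal_px=1000.0, baseline_m=0.3, disparity_err_px=1.0):
    """Approximate stereo depth error: dZ ~= Z^2 * dd / (f * B)."""
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

for z in (10, 30, 60, 100):  # distances in meters
    print(f"at {z:3d} m, roughly {stereo_depth_error(z):5.2f} m of depth uncertainty")
```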

Additionally, radar provides accurate information on the speeds of other vehicles. That said, radar has poor resolution (greater than 10 cm), so a radar's 3D image is unacceptably fuzzy. Radar also has difficulty detecting stationary objects. Consequently, for accurate object detection and classification, radar cannot be used without being combined with cameras.
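
A minimal sketch of why the two sensors are combined: the camera contributes the object's class label, while radar contributes range and radial speed. The Python below associates detections by nearest bearing, which is a deliberate simplification; all detection values and field names are invented for illustration.

```python
camera_dets = [{"bearing_deg": 2.0, "label": "car"},
               {"bearing_deg": -15.0, "label": "bicyclist"}]
radar_dets  = [{"bearing_deg": 2.3, "range_m": 48.0, "speed_mps": -3.1},
               {"bearing_deg": -14.5, "range_m": 12.0, "speed_mps": 0.2}]

def fuse(camera_dets, radar_dets, max_sep_deg=2.0):
    """Attach radar range/speed to the camera detection with the nearest bearing."""
    fused = []
    for cam in camera_dets:
        best = min(radar_dets, key=lambda r: abs(r["bearing_deg"] - cam["bearing_deg"]))
        if abs(best["bearing_deg"] - cam["bearing_deg"]) <= max_sep_deg:
            fused.append({**cam, "range_m": best["range_m"], "speed_mps": best["speed_mps"]})
    return fused

for obj in fuse(camera_dets, radar_dets):
    print(obj)  # each object now carries a label, a range, and a speed
```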

Lidar provides high-resolution, three-dimensional information about the surrounding environment. Unlike radar, lidar offers much higher-resolution computer perception data, enabling accurate object detection. Unlike cameras, lidar provides accurate depth perception, with distance accuracy of a few centimeters, making it possible to precisely localize the vehicle's position on the road and detect the free space available for the car to navigate. Lidar also offers a 360-degree horizontal field of view and up to a 40-degree vertical field of view, giving the vehicle the ability to generate dense, high-resolution 3D maps of the environment.
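
Lidar returns are natively range-and-angle measurements; converting them to Cartesian points is what produces those 3D maps. Below is a minimal Python sketch of that conversion, with illustrative beam angles rather than any specific sensor's specification.

```python
import math

def lidar_return_to_xyz(range_m, azimuth_deg, elevation_deg):
    """Convert one lidar return (range plus beam angles) to a 3D point in meters."""
    az = math.radians(azimuth_deg)    # swept through 0-360 degrees horizontally
    el = math.radians(elevation_deg)  # within the vertical field of view
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# A few illustrative returns spanning the horizontal sweep
for az in (0, 90, 180, 270):
    x, y, z = lidar_return_to_xyz(range_m=20.0, azimuth_deg=az, elevation_deg=-5.0)
    print(f"azimuth {az:3d} deg -> x={x:6.2f}, y={y:6.2f}, z={z:5.2f}")
```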

Autonomous vehicles depend on the data provided by their sensors to perceive and navigate the environment. AVs will be equipped with lidar, cameras, and radar to enable safe, reliable full autonomy.

Anand Gopalan, CTO and Sally Frykman, Director of Communications, Velodyne LiDAR, Inc.