News

Still Waiting for Self-Driving Cars

Why is it taking so long for autonomous vehicles to hit the road?
[Image: a moving car against a background of streaking lights]

Over the past decade, technology and automotive pundits have predicted the “imminent” arrival of fully autonomous vehicles that can drive on public roads without any active monitoring or input from a human driver. Elon Musk predicted his company Tesla would deliver fully autonomous vehicles by the end of 2021, after making similar predictions in 2020, 2019, and 2017. Each prediction has fallen flat, largely due to real-world safety concerns, particularly related to how self-driving cars perform in adverse conditions.


The State of Self-Driving Cars Today

Despite such proclamations from Tesla, which released its optimistically named Full Self-Driving capability for Autopilot in October 2021, fully automated self-driving cars have not yet arrived. Instead, most manufacturers are offering systems with capabilities that generally fall within the first three of the six levels of driving automation defined by SAE International (formerly the Society of Automotive Engineers), which range from Level 0 (no driving automation) to Level 5 (full self-driving capability under all conditions).

Most new cars today feature some Level 1 driver-assistance technology, including automatic braking, lane-keeping assist, and adaptive cruise control. More advanced systems, like Tesla’s Autopilot or General Motors’ Super Cruise, fall into the Level 2 classification, indicating the car can autonomously manage its speed and steering but requires the driver to remain focused and able to take control if something goes wrong. Other manufacturers, such as Honda and Audi, are focusing on Level 3 autonomous systems that allow the car to take complete control, but only under very specific conditions, such as low-speed driving in traffic, in good weather, and only on preapproved roads. The six levels are summarized in the sketch below.
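For reference, the taxonomy can be captured in a simple lookup table. The descriptions below paraphrase the J3016 level names (see Further Reading for the standard itself); this is an illustration, not the normative standard text.

```python
# Illustrative summary of the SAE J3016 driving-automation levels,
# paraphrased from the article and the J3016 overview.
SAE_LEVELS = {
    0: "No driving automation",
    1: "Driver assistance (e.g., adaptive cruise control or lane keeping)",
    2: "Partial automation: speed and steering, but the driver must supervise",
    3: "Conditional automation: full control only under specific conditions",
    4: "High automation: no driver needed within a defined operating domain",
    5: "Full automation under all conditions",
}

for level, description in sorted(SAE_LEVELS.items()):
    print(f"Level {level}: {description}")
```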

“I’m sure a lot of manufacturers would like to spring over Level 2 and Level 3, to the greatest extent possible,” says Peter Hancock, a Pegasus Professor and Provost Distinguished Research Professor in the Department of Psychology and the Institute for Simulation and Training at the University of Central Florida.

However, the most likely scenario under which Level 4-capable vehicles will arrive may be through the deployment of autonomous long-haul trucks. Hancock says that due to global truck driver shortages, there likely will be a stronger push to develop autonomous trucks, at least on interstate highways in the U.S., which are constructed to a specific design standard, are usually in good physical condition, and feature physical barriers separating oncoming traffic.

Indeed, Aurora Innovation announced it is building a Level 4 autonomous system and plans to launch its autonomous trucking business in 2023, followed by its autonomous ride-hailing business in 2024. The company told Communications it has partnered with FedEx, Uber, Toyota, and truck OEMs Volvo and PACCAR to develop a partnership ecosystem focused on bringing self-driving technology to market.


Technological Hurdles

That said, widespread adoption of autonomous driving is still years away, largely due to the challenges involved in developing accurate sensors and cameras, as well as refining the algorithms that act on the data those sensors capture. Autonomous vehicles require cameras and sensors that can detect the physical world and the objects a driverless car is likely to encounter. These include road signs, traffic signals, and other vehicles or pedestrians; specific road features such as lane markings and potholes; and debris or substances such as blown-out truck tires, ice, or puddles.

Most systems take a bottom-up approach, training the vehicle’s navigation system to identify these specific objects or conditions. However, this process is extremely data-intensive, given the large variety of potential objects that could be encountered, as well as the near-infinite ways objects can move or react to stimuli (for example, road signs may not be accurately identified due to lighting conditions, glare, or shadows, and animals and people do not all respond the same way when a car is hurtling toward them).

This data is fed into artificial intelligence (AI) training algorithms designed to help the vehicle interpret those objects and actions, so the vehicle can safely adjust its speed, position, and steering, even on roads it has never traveled, or in the presence of objects it has never before encountered. However, the algorithms in use still have difficulty identifying objects in real-world scenarios; in one accident involving a Tesla Model S, the vehicle’s cameras failed to identify a truck’s white side against a brightly lit sky.
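To make the bottom-up approach concrete, the sketch below shows the core of such a training loop in PyTorch. The class list, model size, and random batch are illustrative assumptions standing in for the millions of labeled camera frames a production pipeline would consume; no manufacturer’s actual code is implied.

```python
import torch
import torch.nn as nn

# Toy label set for the road objects discussed above.
ROAD_OBJECT_CLASSES = ["road_sign", "traffic_signal", "vehicle",
                       "pedestrian", "lane_marking", "pothole", "debris"]

# A deliberately tiny convolutional classifier; real perception stacks are
# far larger and typically perform detection, not whole-frame classification.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, len(ROAD_OBJECT_CLASSES)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of labeled camera frames."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# A random batch stands in for real labeled imagery.
frames = torch.randn(8, 3, 64, 64)                       # eight RGB frames
labels = torch.randint(len(ROAD_OBJECT_CLASSES), (8,))   # ground-truth classes
print(train_step(frames, labels))
```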


Potential Solutions

Many autonomous vehicle incidents involve “edge cases,” situations the systems have not been specifically trained to handle. Examples include pedestrian or animal interactions, and situations involving aggressive drivers or drivers who purposely defy traffic laws and conventions, such as motorcyclists who engage in lane-splitting (driving between lanes to avoid traffic congestion). Researchers are now looking at how high-definition (HD) mapping systems, which are far more precise than GPS, or communications technology that lets vehicles talk to highway infrastructure or to each other, could help autonomous vehicles maintain their sense of place in these edge-case situations.

However, communications networks suffer from latency: the time it takes a data signal to make a round trip from the vehicle, through other communications infrastructure, and back to the vehicle. Split-second decisions therefore may not be manageable via vehicle-to-everything (V2X) communications, a category that includes V2I (vehicle-to-infrastructure), V2N (vehicle-to-network), V2V (vehicle-to-vehicle), V2P (vehicle-to-pedestrian), V2D (vehicle-to-device), and V2G (vehicle-to-grid).
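A back-of-the-envelope calculation shows why that latency matters. The sketch below computes how far a vehicle travels while waiting on a network round trip; the latency figures are illustrative assumptions, not measurements of any particular network.

```python
# How far does a car travel "blind" while a decision round-trips the network?
def distance_traveled_m(speed_kmh: float, latency_ms: float) -> float:
    """Meters covered during one network round trip at a given speed."""
    return (speed_kmh / 3.6) * (latency_ms / 1000.0)

# Illustrative latencies: an optimistic edge deployment vs. a congested network.
for latency_ms in (20, 100, 500):
    meters = distance_traveled_m(110, latency_ms)  # 110 km/h highway speed
    print(f"{latency_ms:>3} ms round trip -> {meters:.1f} m traveled")
```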

The approach taken by the self-driving development teams at Audi, Honda, Toyota, Volvo, and Aurora Innovation is to incorporate light detection and ranging (LiDAR) technology. Aurora says it has designed a proprietary sensor, FirstLight Lidar, which uses Frequency Modulated Continuous Wave (FMCW) lidar to see up to a quarter of a mile (about 400 meters) ahead while instantaneously measuring the velocity of the actors around the vehicle. Aurora says this creates more time for an autonomous system to apply the brakes or maneuver safely, which is particularly important for heavy autonomous trucks.
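The “instantaneous velocity” claim rests on well-known physics: an FMCW lidar observes a Doppler shift between outgoing and returning light, from which the target’s radial speed follows directly. The sketch below works through that relation with illustrative numbers; it is not Aurora’s implementation.

```python
def radial_velocity_ms(wavelength_m: float, doppler_shift_hz: float) -> float:
    """Target speed toward the sensor, from the observed Doppler shift:
    v = (doppler shift * wavelength) / 2  (the 2 accounts for the round trip)."""
    return doppler_shift_hz * wavelength_m / 2.0

# A 1550-nm laser (common in automotive lidar) and a 3.9-MHz measured shift:
print(radial_velocity_ms(1550e-9, 3.9e6))   # ~3.0 m/s closing speed

# And the payoff of a 400-m detection range for a heavy truck:
print(400 / (110 / 3.6))                     # ~13 s of warning at 110 km/h
```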


Many autonomous vehicle incidents involve “edge cases”—situations systems have not been trained to handle, including those involving aggressive drivers.


Meanwhile, autonomous driving company Waymo is focused on providing ride-hailing services through its Waymo One brand in the East Valley region of Phoenix, AZ. Although it declined an interview request, the company notes that its Waymo Driver technology operates largely under Level 4 autonomous driving guidelines, mapping its territory in granular fashion (including lane markers, traffic signs and lights, curbs, and crosswalks). The system incorporates both GPS signals and real-time sensor data to determine the vehicle’s exact location at all times. Further, it draws on more than 20 million miles of real-world driving and more than 20 billion miles in simulation to allow the Waymo Driver to anticipate what other road users, pedestrians, or other objects might do.
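Combining GPS with precise onboard sensing is, at its core, a sensor fusion problem. The sketch below shows textbook inverse-variance weighting of two noisy position estimates, a one-dimensional stand-in for the Kalman-style filters such systems typically use; it is a generic illustration, not Waymo’s method.

```python
def fuse(gps_pos: float, gps_var: float,
         lidar_pos: float, lidar_var: float) -> tuple[float, float]:
    """Inverse-variance fusion: the less-noisy estimate dominates the result."""
    w_gps = lidar_var / (gps_var + lidar_var)    # weight on the GPS estimate
    fused = w_gps * gps_pos + (1.0 - w_gps) * lidar_pos
    fused_var = (gps_var * lidar_var) / (gps_var + lidar_var)
    return fused, fused_var

# GPS is meters-accurate; lidar matched against an HD map is cm-accurate,
# so the fused estimate lands essentially on the lidar fix.
print(fuse(gps_pos=105.0, gps_var=9.0, lidar_pos=103.2, lidar_var=0.01))
```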

A potential intermediate solution currently being tested in Germany is to use remote drivers to control vehicles. Vay, a Berlin-based startup, has been testing a fleet of remote-controlled electric vehicles in that city and plans to roll out a mobility service in Europe, and potentially the U.S., this year. The service will allow customers to order a remote-controlled car and have it drive them to their desired destination; on arrival, they get out of the vehicle and leave it to a human teledriver miles away to either park it or steer it to the next client. The company claims its system is engineered to meet the latest automotive safety and cybersecurity standards, and deploys redundancies in its hardware components and its cellular network connectivity.

Industry watchers are not convinced such remote operation of vehicles is safe or practical. “Latency and connectivity are a big problem, though maybe with some new technologies or more advanced communication technologies, it could improve,” says Gokul Krithivasan, global engineering manager for Autonomy and Functional Safety at technical and management consulting firm kVA by UL, which is involved with autonomous vehicle safety and training, and the development of relevant safety standards. While not specifically commenting on Vay’s model or approach, Krithivasan says emergency situations faced by drivers often require decisions to be made in milliseconds, and any latency issues due to network congestion may make it difficult for a fully remote driver to respond in an emergency. “In typical implementations for SAE Level 4 autonomous applications, remote operators are not expected to control the vehicle continuously, but are tasked with engaging or triggering an appropriate minimal risk maneuver that’s already equipped in the autonomous control logic,” Krithivasan explains.
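The supervision model Krithivasan describes can be made concrete with a small state machine: the remote operator never steers continuously, but sends a discrete trigger for a minimal risk maneuver (MRM) that the onboard autonomy already knows how to execute. The sketch below is a hypothetical illustration of that division of labor; the names and states are invented.

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()       # normal onboard driving
    MRM_PULL_OVER = auto()    # onboard logic brings the vehicle to a safe stop

class SupervisedVehicle:
    def __init__(self) -> None:
        self.mode = Mode.AUTONOMOUS

    def on_remote_command(self, command: str) -> None:
        # Only discrete triggers are accepted; network latency then delays
        # when the maneuver starts, not how it is executed.
        if command == "trigger_mrm":
            self.mode = Mode.MRM_PULL_OVER

vehicle = SupervisedVehicle()
vehicle.on_remote_command("trigger_mrm")
print(vehicle.mode)   # Mode.MRM_PULL_OVER, executed entirely onboard
```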


Training AV Systems to Understand Human Behavior

However, for self-driving systems to function safely in all driving scenarios, significant work remains in algorithm development and testing to ensure vehicle navigation systems understand the complex social interactions that often occur between oncoming and adjacent drivers, or between drivers and pedestrians. Generally, if a pedestrian is about to cross or is crossing a street, the driver and pedestrian will make eye contact and use nonverbal cues to indicate the direction and speed of their movement. Similarly, a lack of eye contact signals to the driver that the pedestrian, or other driver, is unaware of their presence, and that the driver should be prepared to take evasive action to avoid or mitigate a collision. A toy illustration of cue-based intent estimation follows.
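The sketch below combines a few such cues, whether the pedestrian is facing the vehicle, how fast they are walking, and how close they are to the curb, into a crossing probability. The features, weights, and thresholds are invented for illustration; as Hancock notes next, production systems must learn such models from enormous amounts of data.

```python
def crossing_probability(facing_vehicle: bool,
                         walking_speed_ms: float,
                         distance_to_curb_m: float) -> float:
    """Crude cue-based estimate that a pedestrian is about to cross."""
    p = 0.5
    p += -0.2 if facing_vehicle else 0.2   # eye contact suggests awareness
    p += 0.2 if walking_speed_ms > 1.0 else -0.1
    p += 0.2 if distance_to_curb_m < 0.5 else -0.1
    return min(max(p, 0.0), 1.0)

# A distracted pedestrian walking briskly, still a step from the curb:
print(crossing_probability(facing_vehicle=False,
                           walking_speed_ms=1.4,
                           distance_to_curb_m=0.8))   # 0.8
```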

Hancock says training a system to recognize these cues, or the lack thereof, can be done, but it requires massive amounts of compute power and training time, and it will take years to develop a reliable and trustworthy system. “A big area [relevant to this topic] is something called perceptual affordances, which varies greatly between the human and the automation,” Hancock says. “We usually are pretty good at understanding other human beings. So, when we see an accident with a human being driver, we sort of look at it from human eyes and go, ‘Yeah, I can understand how that happened.’ But then we look at automated accidents and go, ‘Well, that’s ridiculous—I’ve got no idea how the heck that [car] made that mistake.’”

Normally, human drivers accumulate enough experience over time to safely handle situations in which other drivers make irrational or unexpected decisions, often by slowing down, pulling over, or simply maintaining their speed and direction of travel so that the human, animal, or other vehicle can navigate around them.


Hancock says it will take years and massive compute power to train self-driving systems to understand the nonverbal cues that pass between drivers and pedestrians.


“The key remaining challenge is that current AV algorithms don’t have a sophisticated-enough implicit understanding of human behavior to handle interactions in traffic [effectively],” says Gustav Markkula, Chair in Applied Behavior Modelling at the U.K.’s University of Leeds. “Human road users have that implicit understanding—in the sense that we sufficiently understand, consciously or subconsciously (a lot of the latter I would say), what others are up to on the road and how to behave ourselves to achieve safe and efficient interaction outcomes most of the time.”


Regulatory Challenges

Perhaps the greatest hurdles to commercializing fully autonomous vehicles are ethical and liability concerns, including the question of which party bears fault if a self-driving car kills or injures someone, or destroys property. For years, the U.S. government declined to regulate driver-assist systems such as Tesla’s Autopilot and GM’s Super Cruise.

The tide may be changing. In June 2021, the U.S. government said all automakers must report crashes involving driver-assist systems. In August 2021, the National Highway Traffic Safety Administration (NHTSA) launched an investigation into Teslas using Autopilot that rear-ended emergency vehicles, crashes that have been responsible for 17 injuries and one death. Furthermore, in October 2021, President Joe Biden’s administration appointed Missy Cummings, a Duke University engineering professor who studies autonomous systems, as a senior advisor for safety at the NHTSA. Cummings has been critical of Tesla and of the federal government’s handling of driver-assist systems like Autopilot.

Though Cummings’ appointment is unlikely to spur any immediate rule-making, the NHTSA’s five-year-old guidance articulates the agency’s authority to intervene if autonomous driving systems show evidence of “predictable abuse,” often illustrated by YouTube videos of drivers sleeping, playing games, or engaging in other activities that divert their attention while in the driver’s seat, despite the warnings in Tesla’s manual.

Ultimately, fully autonomous Level 5 driving systems are likely a decade or more away, at least in terms of being deployed in privately owned and operated vehicles. The combination of technical issues, regulatory questions, and the ongoing microchip shortage presents clear hurdles to the adoption of fully autonomous systems. Full autonomy is likely to be deployed first in commercial vehicles, including autonomous trucks, ride-hailing services, and shuttles. In addition to having the capital to fund the purchase of these vehicles, commercial operators are better positioned to restrict operation to specific, known roads, and to establish and enforce company-specific safe operating parameters, such as using cameras to ensure safety drivers are actively paying attention to the road.

Further Reading

SAE International
SAE J3016 Levels of Driving Automation, May 3, 2021, https://www.sae.org/blog/sae-j3016-update

Hancock, P.A.
Driving Into the Future, Frontiers in Psychology, September 18, 2020, https://doi.org/10.3389/fpsyg.2020.574097

NHTSA
Enforcement Guidance Bulletin 2016-02: Safety-Related Defects and Automated Safety Technologies, Federal Register, September 23, 2016, https://bit.ly/3bcGLyl

VICE News
We Were Promised Self-Driving Cars. Where Are They? | The Couch Report, September 13, 2021, https://www.youtube.com/watch?v=g21zQhr95FQ
