Opinion
Computing Applications Law and technology

Is the Law Ready for Driverless Cars?

Yes, with one big exception.
Posted
  1. Article
  2. References
  3. Author
  4. Footnotes
Is the Law Ready for Driverless Cars? illustrative photo

I am a law professor who teaches torts and has been studying driverless cars for almost a decade. Despite the headlines, I am reasonably convinced U.S. common law is going to adapt to driverless cars just fine. The courts have seen hundreds of years of new technology, including robots. American judges have had to decide, for example, whether a salvage operation exercises exclusive possession over a shipwreck by visiting it with a robot submarine (it does) and whether a robot copy of a person can violate their rights of publicity (it can). Assigning liability in the event of a driverless car crash is not, in the run of things, all that tall an order.

There is, however, one truly baffling question courts will have to confront when it comes to driverless cars—and autonomous systems in general: What to do about genuinely unforeseeable categories of harm?

Imagine a time when driverless cars are wildly popular. They are safer than vehicles with human drivers and their occupants can watch movies or catch up on email while traveling in the vehicle. Notwithstanding some handwringing by pundits and the legal academy, courts have little trouble sorting out who is liable for the occasional driverless car crash. When someone creates a product that is supposed to move people around safely and instead crashes, judges assign liability to whoever built the vehicle or vehicles involved in the accident.

There are some difficult cases on the horizon. Policymakers will have to determine just how much safer driverless cars will need to be compared to human-operated cars before they are allowed—or even mandated—on public roads.


There are some difficult cases on the horizon.


Courts will have to determine who is responsible in situations where a human or a vehicle could have intervened but did not. On the one hand, courts tend to avoid questions of machine liability if they can find a human operator to blame. A court recently placed the blame of an airplane accident exclusively on the airline for incorrectly balancing the cargo hold despite evidence the autopilot was engaged at the time of the accident.7 On the other hand, there is presumably a limit on how much responsibility a company can transfer to vehicle owners merely because they clicked on a terms of service agreement. In the fatal Tesla crash last year, the deceased driver seemingly assumed the risk of engaging the autopilot. The pedestrian killed this year by an Uber driverless car took on no such obligation. In its centuries of grappling with new technologies, however, the common law has seen tougher problems than these and managed to fashion roughly sensible remedies. Uber will likely settle its case with the pedestrian’s family. If not, a court will sort it out.

Some point to the New Trolley Problem, which posits that cars will have to make fine-grained moral decisions about whom to kill in the event of an accident. I have never found this hypothetically particularly troubling. The thought experiment invites us to imagine a robot so poor at driving that, unlike you or anyone you know, the car finds itself in a situation that it must kill someone. At the same time, the robot is so sophisticated that it can somehow instantaneously weigh the relatively moral considerations of killing a child versus three elderly people in real time. The New Trolley Problem strikes me as a quirky puzzle in search of a dinner party.

Technology challenges law not when it shifts responsibility in space and time, as driverless cars may, but when the technology presents a genuinely novel affordance that existing legal categories failed to anticipate.

Imagine one manufacturer stands out in this driverless future. Not only does its vehicle free occupants from the need to drive while maintaining a sterling safety record, it adaptively reduces its environmental impact. The designers of this hybrid vehicle provide it with an objective function of greater fuel efficiency and the leeway to experiment with system operations, consistent with the rules of the road and passenger expectations. A month or so after deployment, one vehicle determines it performs more efficiently overall if it begins the day with a fully charged battery. Accordingly, the car decides to run the gas engine overnight in the garage—killing everyone in the household.

Imagine the designers wind up in court and deny they had any idea this would happen. They understood a driverless car could get into an accident. They understood it might run out of gas and strand the passenger. But they did not in their wildest nightmares imagine it would kill people through carbon monoxide poisoning.

This may appear, at first blush, to be just as easy a case as the driverless car collision. It likely is not. Even under a strict liability regime—which dispenses with the need to find intent or negligence on the part of the defendant—courts still require the plaintiff to show the defendant could foresee at least the category of harm that transpired. The legal term is "proximate causation." Thus, a company that demolishes a building with explosives will be liable for the collapse of a nearby parking garage due to underground vibrations, even if the company employed best practices in demolition. But, as a Washington court held in 1954, a demolition company will not be liable if mink at a nearby mink farm react to the vibrations by instinctively eating their young.2 The first type of harm is foreseeable and therefore a fair basis for liability; the second is not.

We are already seeing examples of emergent behavior in the wild, much less in the university and corporate research labs that work on adaptive systems. A Twitter bot once unexpectedly threatened a fashion show in Amsterdam with violence, leading the organizers to call the police.3 Tay—Microsoft’s ill-fated chatbot—famously began to deny the Holocaust within hours of operation.5 And who can forget the flash crash of 2010, in which high-speed trading algorithms destabilized the market, precipitating a 10% drop in the Dow Jones within minutes.4

As increasing numbers of adaptive systems enter the physical world, courts will have to reexamine the role foreseeability will play as a fundamental arbiter of proximate causation and fairness.1 That is a big change, but the alternative is to entertain the prospect of victims without perpetrators. It is one thing to laugh uneasily at two Facebook chatbots that unexpectedly invent a new language.a It is another to mourn the loss of a family to carbon monoxide poisoning while refusing to hold anyone accountable in civil court.

We lawyers and judges have our work cut out for us. We may wind up having to jettison a longstanding and ubiquitous means of limiting liability. But what role might there be for system designers? I certainly would not recommend stamping out adaptation or emergence as a research goal or system feature. Indeed, machines are increasingly useful precisely because they solve problems, spot patterns, or achieve goals in novel ways no human imagined.

Nevertheless, I would offer a few thoughts for your consideration. First, it seems to me worthwhile to invest in tools that attempt to anticipate robot behavior and mitigate harm.b The University of Michigan has constructed a faux city to test driverless cars. Short of this, virtual environments can be used to study robot interactions with complex inputs. I am, of course, mindful of the literature suggesting that the behavior of software cannot be fully anticipated as a matter of mathematics. But the more we can do to understand autonomous systems before deploying them in the wild, the better.

Second, it is critical that researchers be permitted and even encouraged to test deployed systems—without fear of reprisal. Corporations and regulators can and should support research that throws curveballs to autonomous technology to see how it reacts. Perhaps the closest analogy is bug bounties in the security context; at a minimum, terms of service agreements should clarify that safety-critical research is welcome and will not be met with litigation.

Finally, the present wave of intelligence was preceded by an equally consequential wave of connectivity. The ongoing connection firms now maintain to intelligence products, while in ways problematic, also offers an opportunity for better monitoring.6 One day, perhaps, mechanical angels will sense an unexpected opportunity but check with a human before rushing in.

None of these interventions represent a panacea. The good news is that we have time. The first generation of mainstream robotics, including fully autonomous vehicles, does not present a genuinely difficult puzzle for law in this law professor’s view. The next well may. In the interim, I hope the law and technology community will be hard at work grappling with the legal uncertainty that technical uncertainty understandably begets.

Back to Top

Back to Top

Back to Top

    1. Calo, R. Robotics and the lessons of cyberlaw. California Law Review 513, 103 (2015).

    2. Foster v. Preston Mill Co. 268 P.2d 645 (Wash. 1954).

    3. Hill, K. Who do we blame when a robot threatens to kill people? Spinter.com (Feb. 15, 2015); http://bit.ly/2FFKszl

    4. Hope, B. and Ackerman, A. 'Flash crash' overhaul is snarled in red tape. Wall Street Journal (May 5, 2015); http://on.wsj.com/2ph9w4D

    5. Price, R. Microsoft is deleting its AI chatbot's incredibly racist tweets. Business Insider (Mar. 24, 2016); http://read.bi/1ZwcFYZ

    6. Walker Smith, B. Proximity-driven liability. Georgetown Law Journal 1777, 102 (2014).

    7. Vladeck, D.C. Machines without principles. Washington Law Review 117, 89 (2014).

    a. Tim Collins and Mark Prigg, "Facebook shuts down controversial chatbot experiment after AIs develop their own language to talk to each other," Daily Mail (Jul. 31, 2017); http://dailym.ai/2vnk47J Also see "Did Facebook Shut Down an AI Experiment Because Chatbots Developed Their Own Language?" Snopes.com (Aug. 1, 2017); http://dailym.ai/2vnk47J (concluding that Facebook did not necessarily expect the behavior but nor did it shut down the experiment as a result of it).

    b. For a prescient discussion, see Jeffrey Mogel, "Emergent (Mis) Behavior vs. Complex Software Systems." ACM SIGOPS 40, 4 (Oct. 2006), 293–304.

Join the Discussion (0)

Become a Member or Sign In to Post a Comment

The Latest from CACM

Shape the Future of Computing

ACM encourages its members to take a direct hand in shaping the future of the association. There are more ways than ever to get involved.

Get Involved

Communications of the ACM (CACM) is now a fully Open Access publication.

By opening CACM to the world, we hope to increase engagement among the broader computer science community and encourage non-members to discover the rich resources ACM has to offer.

Learn More