Who Lives and Who Dies? Just Let Your Car Decide

By Tim Cramton

For more of our coverage of the issues discussed below, see Steve Mach’s discussion of insuring autonomous vehicles here.

Self-driving cars, also known as Autonomous Vehicles (AVs), are a hot topic these days. Many companies—including tech giants like Google and Apple and automakers like Toyota and Tesla—view self-driving cars as the future of transportation and the auto industry. This raises a lot of questions—especially regarding safety.

In fact, autonomous vehicles are supposed to be very safe. Widespread adoption of AVs promises to drastically reduce crashes caused by human error, which account for over 90 percent of car accidents and cost over $400 billion every year. More importantly, AVs could reduce car accident fatalities by 95 percent.

But some accidents cannot be avoided. What happens in situations of unavoidable harm, where the AV must choose between two evils?

Say, for example, you are driving (or rather, being driven by) an AV. Suddenly, a group of five pedestrians rushes directly into the AV’s path. The car cannot stop in time; the only way to save the five pedestrians is for your AV to swerve out of the way and crash into a wall—instantly killing you. Either way, people die. It’s just a question of who and how many. How will AVs be programmed to make those decisions, and what should they be programmed to do?

More importantly, is America ready for computers to make those life-or-death decisions for us, with an algorithm determining who lives and who dies? It’s a question straight out of a science fiction movie.

That’s exactly what a recently published study tried to answer. In the study, titled Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?, scientists asked ordinary people what they thought an AV should do in that situation: kill the driver and save the pedestrians, or save the driver and kill the pedestrians? Respondents generally believed the cars should be programmed to make utilitarian decisions—that is, to minimize overall damage by accepting the smaller harm for the greater good. As in the famous “Trolley Problem,” utilitarian principles dictate sacrificing the one driver to save the five pedestrians, minimizing the overall loss of life.

Unfortunately, the answer here only raises more questions. Who should be liable for the damages resulting from an automated decision? If the pedestrians caused the accident, should the driver still be sacrificed to save the most lives? Are automated self-sacrificing decisions legally enforceable? For example, if I don’t want my car to kill me, could the law prevent me from re-programming my car to be self-preservationist? Should we program cars to value some lives more than others, favoring children over the elderly or protecting the President from self-sacrifice? These big questions still do not have answers, and the law has not yet addressed them.

There is also this caveat: the study found that respondents wanted other people to drive utilitarian cars more than they wanted to buy a car that might decide to kill them. And that makes sense, following the classic social dilemma of self-preservation: people generally support utilitarian theories of sacrifice for the greater good, but only to the extent that someone else is sacrificed.

While this seems like a philosophical question, these findings have economic implications. Generally, consumers want to buy products that reflect their moral values. If people believe self-driving cars should value human life according to utilitarian principles, then that may be how car companies will make them. However, if people do not actually want to drive these potentially self-sacrificing utilitarian cars, then no one will buy them.

Of course, this question raises legal issues as well. Currently, only four states and Washington, D.C. have passed laws regarding AVs. However, those laws would expressly prohibit the situation posed in the study. Generally, existing laws require a human driver to be in the car, able to take control in an emergency. In our scenario, the instant the car recognized an impending collision with the pedestrians, its operating system would revert to manual control—leaving the moral dilemma (and liability) to the driver.

Federal guidelines, laid out by the National Highway Traffic Safety Administration, are consistent with state law. Specifically, the guidelines recommend that every AV be capable of ceding full control of all safety-critical functions back to the driver in a safe, simple, and timely manner. Again, these regulations eliminate the possibility of our scenario by placing the critical decision back into human hands. Unfortunately, no guideline or law explains how handing control back to the driver at the instant of imminent harm is the safest option.

But maybe people just feel more comfortable being in control of their own fate. And that answers our big question: No, we’re not ready for computers to make our life-or-death decisions—at least not when we’re driving.