Who Lives and Who Dies? Just Let Your Car Decide
November 16, 2015

For more of our coverage of the issues below, see Steve Mach’s discussion of insuring autonomous vehicles here.
Self-driving cars, also known as autonomous vehicles (AVs), are a hot topic these days. Many companies, including tech giants like Google and Apple and automakers like Toyota and Tesla, view self-driving cars as the future of transportation and the auto industry. This raises many questions, especially about safety.
In fact, autonomous vehicles are expected to be very safe. Widespread adoption of AVs promises to drastically reduce accidents resulting from human error, which account for over 90 percent of car accidents and cost over $400 billion every year. More importantly, AVs could reduce car-accident fatalities by 95 percent.
But some accidents cannot be avoided. What happens in situations of unavoidable harm, where the AV must choose between two evils?
Say, for example, you are driving (or rather, being driven by) an AV. Suddenly, a group of five pedestrians rushes directly into the AV’s path. The car cannot stop in time; the only way to save the five pedestrians is for your AV to swerve out of the way and crash into a wall, instantly killing you. Either way, people die. It’s just a question of who and how many. How will AVs be programmed to make those decisions, and what should they be programmed to do?
More importantly, is America ready for computers to make those life-or-death decisions for us, with an algorithm determining who lives and who dies? It’s a question straight out of science fiction.
That’s exactly what a recently published study tried to answer. In the study, titled “Autonomous Vehicles Need Experimental Ethics: Are We Ready for Utilitarian Cars?”, researchers asked ordinary people what they thought an AV should do in that situation: kill the driver and save the pedestrians, or save the driver and kill the pedestrians? Respondents generally believed the cars should be programmed to make utilitarian decisions, that is, to minimize overall damage by accepting a smaller harm for the greater good. As in the famous “Trolley Problem,” utilitarian principles dictate sacrificing the one driver to save the five pedestrians, minimizing the overall loss of life.
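The utilitarian rule respondents favored is simple enough to state in a few lines of code. Here is a minimal sketch of it; the scenario, the numbers, and the class and function names are illustrative assumptions for this article’s hypothetical, not the study’s model or any real AV planner:

```python
# A minimal sketch of a utilitarian collision rule: among the available
# maneuvers, pick whichever one minimizes expected loss of life.
# All names and numbers below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_fatalities: int  # harm predicted if this maneuver is taken

def utilitarian_choice(options: list[Maneuver]) -> Maneuver:
    """Return the maneuver with the fewest expected fatalities."""
    return min(options, key=lambda m: m.expected_fatalities)

# The article's scenario: stay the course and kill five pedestrians,
# or swerve into a wall and kill the one occupant.
options = [
    Maneuver("continue straight", expected_fatalities=5),
    Maneuver("swerve into wall", expected_fatalities=1),
]

print(utilitarian_choice(options).name)  # -> "swerve into wall"
```

Even this toy version makes the controversy plain: by construction, the lone occupant loses the tiebreak every time.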
Unfortunately, the answer here only raises more questions. Who should be liable for the damages resulting from an automated decision? If the pedestrians caused the accident, should the driver still be sacrificed to save the most lives? Are automated self-sacrificing decisions legally enforceable? For example, if I don’t want my car to kill me, could the law prevent me from re-programming my car to be self-preservationist? Should we program cars to value some lives more than others, favoring children over the elderly or protecting the President from self-sacrifice? These big questions still do not have answers, and the law has not yet addressed them.
There is also a caveat: the study found that respondents wanted other people to drive utilitarian cars more than they wanted to buy a car that might decide to kill them. That makes sense as a classic social dilemma of self-preservation: people generally support utilitarian sacrifice for the greater good, but only to the extent that someone else is sacrificed.
While this seems like a philosophical question, these findings have economic implications. Generally, consumers want to buy products that reflect their moral values. If people believe self-driving cars should value human life according to utilitarian principles, then that may be how car companies will make them. However, if people do not actually want to drive these potentially self-sacrificing utilitarian cars, then no one will buy them.
Of course, this question raises legal issues as well. Currently, only four states and Washington, D.C. have passed laws regarding AVs, and those laws would effectively prohibit the situation posed in the study. Generally, existing laws require a human driver to be in the car and able to take control in case of emergency. In our scenario, the instant the car recognized an impending collision with the pedestrians, its operating system would revert to manual control, leaving the moral dilemma (and liability) to the driver.
Federal guidelines, laid out by the National Highway Traffic Safety Administration, are consistent with state law. Specifically, the guidelines recommend that every AV be capable of ceding full control of all safety-critical functions back to the driver in a safe, simple, and timely manner. Again, these regulations eliminate the possibility of our scenario by relegating the critical decision to human hands. Unfortunately, no guideline or law explains how handing control back to the driver at the instant of imminent harm could be the safest option.
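To see how these rules sidestep the dilemma rather than resolve it, consider a rough sketch of the handover behavior they contemplate. This assumes a simple two-mode controller; the mode names and trigger are my own illustrations, not language from NHTSA or any state statute:

```python
# Illustrative sketch of the handover rule the article describes:
# the moment harm is imminent, the system cedes control to the human.
# Mode names and the trigger condition are assumptions for illustration.

from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()
    MANUAL = auto()

def next_mode(mode: Mode, collision_imminent: bool) -> Mode:
    """Cede control to the driver as soon as a collision is imminent."""
    if mode is Mode.AUTONOMOUS and collision_imminent:
        # The moral dilemma (and the liability) is now the driver's.
        return Mode.MANUAL
    return mode

mode = Mode.AUTONOMOUS
mode = next_mode(mode, collision_imminent=True)
print(mode)  # -> Mode.MANUAL
```

Note what the sketch does not contain: any logic for actually resolving the dilemma. The hard decision is simply pushed back onto a human who may have seconds, or less, to react.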
But maybe people just feel more comfortable being in control of their own fate. And that answers our big question: no, we’re not ready for computers to make our life-or-death decisions, at least not when we’re driving.