
Imperfect Solutions to Driverless Car Dilemmas

Three rules for driverless vehicles were announced by the German Transport Minister, Alexander Dobrindt, in a September 8th interview with WirtschaftsWoche. In English translation, the rules are:

(1) “It is clear that property damage always takes precedence over personal injury.”

(2) “There must be no classification of people, for example, based on size, age, and the like.”

(3) “If something happens, the manufacturer is liable.”

The first and third rules are meant to settle legal and liability concerns. The second speaks to the dilemma of what a driverless car should do when its only options are to strike and kill pedestrians or to take an action that will lead to the deaths of those in the car. While such situations will certainly be rare, they will occasionally occur, and they have taken on outsized significance in debates over the adoption of driverless vehicles. The rule would appear to ban sensors and software that could detect and evaluate whether the car was about to strike children rather than senior citizens. It is silent, however, on whether the car might, for example, drive off a bridge, killing its passengers, in order to save a larger number of pedestrians.

Is the German rule satisfactory? Is it likely to be adopted by other governments? I think “no” on both counts. Regardless, Dobrindt also announced the creation of an ethics commission to work through the specifics of how these rules might be applied in future law.

So-called “trolley problems,” first proposed by the philosopher Philippa Foot in 1967, have become popular in the teaching of ethics and for highlighting the complexity of human moral psychology. In classic trolley problems, choices must be made as to whether to take an action, such as redirecting a train to a different track or pushing a large man off a bridge, in order to save the lives of five people. These actions always entail the death of one person who might otherwise have lived. Subtle distinctions in the situation can lead to very different choices. Most people will throw a switch that redirects a train to another track, trading one life for five, but are loath to push a man off a bridge to certain death even if that would save the lives of five workers down the track.

The application of trolley problems to driverless vehicles was first proposed in November 2012 by New York University psychologist Gary Marcus in The New Yorker. The importance of these problems in the context of driverless vehicles is that they illustrate that driving is something more than a bounded rule-based practice – stop at a stop sign, or look for a child and prepare to brake if you see a ball near the road. Driving is a social practice. Will, for example, a driverless car be capable of interpreting the actions and nonverbal expressions of other drivers at a four-way stop, or the hand gestures of a police officer directing traffic? Designing sensors and software that can adequately deal with these situations has been particularly challenging for the engineers building autonomous vehicles.

Tesla announced in July that one of its customers, Joshua Brown, died in May while using Autopilot, the driverless highway feature the company makes available to owners of its Model S. In a July 5th blog post, Daniel Callahan commented on Tesla’s tardiness in disclosing the accident. Callahan also noted the coincidental publication a week earlier of two articles in Science that discussed public attitudes toward the life-and-death choices driverless vehicles might confront. In one of the articles, Joshua Greene, a Harvard psychologist prominent for research on moral psychology using fMRI brain imaging, proposes that moral philosophers develop moral algorithms for what a vehicle should do when confronted with a difficult life-and-death dilemma. Callahan rejects this proposal: “Speaking as a member of that tribe [moral philosophers], I feel flattered … I confess up front that I don’t think we can do it.”

Even though I have written books and articles about building computers and robots that are sensitive to moral considerations and factor these into their choices and actions, I agree with Callahan. There is no morally “correct” answer to such dilemmas. This is a new situation with no clear precedent and requires a new norm.

To illustrate the depth of this dilemma, consider the findings of opinion research on public attitudes toward driverless car dilemmas, reported in the other article published by Science in July, “The Social Dilemma of Autonomous Vehicles.” The researchers found that the public favored solutions to the dilemmas in which the fewest people died – in other words, simple utilitarian calculations. However, a majority of the respondents would not buy a car that might select an action that would kill the vehicle’s occupants. Surprise, surprise!

Why is this finding so important? According to a 2003-2007 study by the National Highway Traffic Safety Administration, “human error” is a factor in 93 percent of traffic accidents. The primary moral good offered by driverless vehicles is that they will dramatically lower the number of traffic fatalities. In other words, if people will not buy a driverless vehicle that might kill them or their family because of how it would handle a once-in-a-trillion-mile situation, then many additional lives will be lost because there will be fewer driverless vehicles on the roads. A car’s short-term utilitarian calculations could thus result in thousands of deaths that might otherwise be avoided. The longer-term utilitarian calculation, the greatest good for the greatest number, would suggest that the vehicles should not be designed to take actions that would kill their occupants. For moral philosophers, and German politicians, this raises the age-old dilemma as to whether the rights of individual pedestrians, for example, should be sacrificed for the greater moral good.
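The tension between the short-term and long-term utilitarian calculations can be made concrete with a back-of-the-envelope sketch. Every number below – the baseline fatality count, the per-mile risk reduction, and the adoption rates under each design policy – is a hypothetical illustration of my own, not a figure from the studies cited.

```python
# Back-of-the-envelope comparison of two design policies for driverless cars.
# All numbers are hypothetical illustrations, not empirical estimates.

BASELINE_FATALITIES = 35_000   # illustrative annual road deaths with human drivers
RISK_REDUCTION = 0.90          # assumed fatality reduction per driverless mile

def expected_fatalities(adoption_rate: float) -> float:
    """Annual deaths if a given share of driving is autonomous."""
    human_share = 1.0 - adoption_rate
    return BASELINE_FATALITIES * (human_share + adoption_rate * (1 - RISK_REDUCTION))

# Policy A: cars may sacrifice their occupants (strict short-term utilitarianism).
# Suppose fear of this suppresses adoption to 20 percent of driving.
deaths_policy_a = expected_fatalities(0.20)

# Policy B: cars protect their occupants; suppose adoption reaches 80 percent.
deaths_policy_b = expected_fatalities(0.80)

print(f"Policy A (sacrifice occupants, 20% adoption): {deaths_policy_a:,.0f} deaths/yr")
print(f"Policy B (protect occupants, 80% adoption):   {deaths_policy_b:,.0f} deaths/yr")
```

Under these invented assumptions, the occupant-protecting policy averts far more deaths overall simply because more people are willing to use it – which is the long-term utilitarian point of the paragraph above.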

To make matters worse, the pollsters discovered that the public rejected the idea of governments creating standards for how vehicles should address these difficult dilemmas. I’ve proposed that a body of representative stakeholders should reflect upon the various solutions and then recommend new norms for driverless vehicles, however imperfect. To be sure, even if there were an established norm, and that norm could be captured in an algorithmic procedure, driverless vehicles in the near term are unlikely to have sufficient information to make appropriate choices in all situations. Would the car, for example, know how many passengers are in the vehicle, or how many pedestrians it might hit, and which of these are children (a classification excluded by the German rule)? Nevertheless, applied ethics is often about making decisions under conditions of uncertainty, when the available information is either inadequate or inaccurate.

Coming up with an acceptable norm will not be easy. Nor is it clear whether all manufacturers should be required to follow the proposed norm; how a car resolves these dilemmas might, for example, become a feature that manufacturers use to distinguish their products. Nevertheless, the adoption of driverless cars will force governments, manufacturers, and, I hope, citizens, to work through an imperfect resolution to such dilemmas.

Wendell Wallach is a senior advisor to The Hastings Center and a scholar at Yale University’s Interdisciplinary Center for Bioethics. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control.

Published on: September 14, 2016
Published in: Hastings Bioethics Forum, Science and the Self
