
Driverless Cars: Can There Be a Moral Algorithm?

The death in May of a technology expert driving a Tesla in self-driving mode was surely a sad event for his family, but no less a shock for a company and an industry developing such cars. The driver, Joshua Brown, had driven the car more than 45,000 miles and was, along with the company, confident of its safety. His car collided with a tractor-trailer whose dangerous presence was apparently not picked up by the car’s alarm system. The fact that the accident was announced by a federal agency in late June, not by the company in May when it happened, suggests that the company knew it had a public relations disaster on its hands. Its comfortable confidence that it had solved the safety problem was proved wrong in the worst possible way, not only for Tesla but also for the burgeoning driverless auto industry. Apparently undeterred, BMW has just announced that it will have a fully driverless vehicle ready for market by 2021.

By chance, Science magazine in its June 24 issue, a week before the accident was announced, published two pertinent articles. One was an interesting research report by three scientists, “The Social Dilemma of Autonomous Vehicles” (AVs), which drew on surveys the authors had conducted to determine how the public would react to different moral dilemmas involving the cars. The other, a commentary by Joshua Greene, a psychologist prominent for his research on psychology and morality, emphasized the need for a “moral algorithm” to deal with those dilemmas.

The scientific study carried out six online surveys, asking people to respond to a variety of scenarios ranging from their personal judgments of risk to regulatory possibilities and their interest in buying such a car. I can’t summarize all of them here, but one should be enough to capture the methodology and the characteristic outcome. What should be done, for instance, in a traffic situation involving imminent, unavoidable harm: “(a) killing several pedestrians or one pedestrian, (b) killing one pedestrian or its own passenger, and (c) killing several pedestrians or its own passengers?” The overwhelming response was “that it would be more moral for AVs to sacrifice their own passengers when this sacrifice would save a greater number of lives overall,” a classic utilitarian judgment.
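The bare utilitarian rule the respondents endorsed can be sketched in a few lines of code. This is a toy illustration of the abstract judgment only, not the study’s method, and the action names are my own invention:

```python
def choose_action(outcomes):
    """Return the action with the fewest total deaths.

    `outcomes` maps each possible action to the number of lives lost
    if the AV takes it. This is the bare utilitarian rule that survey
    respondents endorsed in the abstract; note that it deliberately
    ignores *who* dies (passenger vs. pedestrian), which is exactly
    where the later scenarios showed agreement breaking down.
    """
    return min(outcomes, key=outcomes.get)

# A scenario (c)-style case: staying on course kills several
# pedestrians, swerving sacrifices the car's own passenger.
print(choose_action({"swerve_and_kill_passenger": 1,
                     "stay_and_kill_pedestrians": 5}))
# -> swerve_and_kill_passenger
```

The triviality of the rule is the point: everything contentious in the survey lies in what the rule leaves out.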

But the other five scenarios added different variables. What about sacrificing not just a “passenger” but oneself? Or one’s family? Or killing 10 pedestrians instead of one? The responses grew more varied as the details changed, and utilitarian answers tended to decline. Other important variables become apparent when the focus shifts to the car’s driver, the company that manufactures the cars, and the government that regulates them—and a category the authors did not add but I will, that of insurers.

The surveys also showed a serious tension between reducing pedestrian deaths and maximizing the driver’s personal protection. Drivers will want the latter, but regulators might come out on the utilitarian side, reducing harm to others. The researchers conclude that a “moral algorithm” taking account of all these variations is needed, and that they “will need to tackle more intricate decisions than those considered in our survey.” As if there were not enough already.

Just who is to do the tackling? And how can an algorithm of that kind be created?  Joshua Greene has a decisive answer to those questions: “moral philosophers.” Speaking as a member of that tribe, I feel flattered. He does, however, get off on the wrong diplomatic foot by saying that “software engineers–unlike politicians, philosophers, and opinionated uncles—don’t have the luxury of vague abstractions.” He goes on to set a high bar to jump. The need is for “moral theories or training criteria sufficiently precise to determine exactly which rights people have, what virtue requires, and what tradeoffs are just.” Exactly!

I confess up front that I don’t think we can do it. Maybe people in Greene’s professional tribe turn out exact algorithms for every dilemma they encounter. If so, we envy them for having all the traits of software engineers. No such luck for us. We will muddle through on these issues as we have always done—muddle through because exactness is rare (and its claimants suspect), because the variables will all change over time, and because there is a varied set of actors (drivers, manufacturers, purchasers, and insurers), each with different interests and values.

Here is how I predict it will go, and it does not take people as smart as we philosophers to figure that out. The drivers will want to feel reasonably safe in AVs and will not buy dangerous ones. The manufacturers will do their best to make that kind of car, or they won’t be able to sell them. And if the insurers, uncertain of the risks, charge high premiums for policies, that will deter buyers as well as hurt the manufacturers. As for the regulators—with the millions of recent recalls of our old-fashioned kind of human-driven cars for various risks in mind—they will be meticulous (even exact) in looking for defects, and unwilling to cut manufacturers any slack on their desired trade-offs between safety and sales.

All of that will be messy and contentious, and there will be no algorithms. Where does that leave us moral philosophers? In this instance, I see no obvious need for our skills. While I am not always a utilitarian, in this case—with so many threats to health and life possible—it needs to be a dominant value. But there could be a call by drivers to be left free to decide how much risk regulators should allow them to run. Beyond that, I see enough overlapping interests among the various moral actors to think that some sensible agreements can be reached through available democratic means. But that minimally requires transparency (don’t delay announcing failures and mistakes) and integrity by all concerned, what we call virtue ethics in philosophy. The greatest hazard is that the potentially large amount of money to be made in the AV industry can be likened to the large truck that blocked the driver’s view. Money is always a big truck.

Daniel Callahan, cofounder and President Emeritus of The Hastings Center, is author most recently of The Five Horsemen of the Modern World.

Published on: July 5, 2016
Published in: Hastings Bioethics Forum, Science and the Self


5 comments on “Driverless Cars: Can There Be a Moral Algorithm?”

  1. David Roscoe

    Great post, Dan. Our national DNA accords large premium value to personal choice. Rather than a political or bureaucratic process to reach a single compromise solution on inherently uncompromisable personal value systems, I can imagine car manufacturers eventually offering consumers moral model “options” at the time of purchase: protect me and my family at all costs; minimize the total number of deaths; favor kids over old people at some prescribed ratio; etc.

    Of course this raises the question of whether those individual choices would be made transparent to anyone else, and if so, to whom and how?

    I can also see some consumers so torn on the “right answer” that their choice might be a real-time random selection by the AI software from one of the many moral models offered by the manufacturer. Some people might prefer a touch of fate to having to make and live with a Sophie’s choice.

    If we think the trolley car dilemma for self-driving cars is difficult, this is just the beginning!

  2. Klim

    It will be many years before we drive these cars. It is all about the awareness of a person who must trust a robot. It will be tough. I personally would not use such a car; maybe my children will.

  3. caradesu

    A driverless car is a bit dangerous; it is not perfect yet. I wouldn’t use that kind of car because I am afraid to.