Grand Rapids Press

Thursday, December 27, 2018


How should self-driving cars make moral decisions?

Rabbi David Krishef

Based on an NPR report, Should Self-Driving Cars Have Ethics?, we ask the panel: Should self-driving cars be programmed with a moral decision-making capability? For example, to avoid a head-on collision, should the car swerve into the crosswalk, possibly killing pedestrians? Whose life is more important — the passengers or the pedestrians?

The Rev. Colleen Squires, minister at All Souls Community Church of West Michigan, a Unitarian Universalist Congregation, responds:

“This is one of the most dangerous types of discussion we as human beings can enter into. The answer needs to be all life is equal. There is no calculation or flow chart that can accurately determine which person’s life is more worthy to live. Maybe we should park the self-driving cars until we can understand the limitations of the human mind. The fact that these manufacturers are asking the general public their opinions on these questions tells me they are beyond understanding the magnitude of the consequences.”

Fred Stella, the pracharak (outreach minister) for the West Michigan Hindu Temple, responds: “It’s easy to say that if we could program cars to recognize the difference between things and people or people and animals, it’s likely that injuries to people could be minimized. But in terms of grading the lives of all the humans who might be involved in an accident, I would hope the car would have no bias built in. We already live in a world where we accept the risk that all of us — pedestrians and car passengers alike — could be struck by a wayward vehicle anytime. Once we have reliable, solidly tested autonomous autos, it is projected that we will be much safer than now as we navigate the road with drinkers, texters and others who have little regard for the welfare of others.”

The Rev. Ray Lanning, a retired minister of the Reformed Presbyterian Church of North America, responds: “What a fascinating possibility of programming a device to make moral decisions for us! It should be pointed out that the ‘moral decisions’ contemplated are all examples of the ‘tragic choice’ between the lesser of two evils. That kind of moral reasoning is an accommodation to the fallen nature of mankind, and the world we have made for ourselves. It is more animal than human in its content and process.

“Whether or not the self-driving automobile will be safer remains to be seen. The optimism of inventors and engineers knows no bounds. The fact is, these machines will be designed and built by fallible men and women. Inevitably, there will be many design flaws, manufacturing faults and electro-mechanical failures or malfunctions. Not to mention tragic unforeseen consequences! ‘Our best works in this life are all imperfect and defiled with sin’ (Heidelberg Catechism, Q. 62); that statement is as true of our material products as it is of our moral reckoning and decisions.”

Father Kevin Niehoff, O.P., a Dominican priest who serves as adjutant judicial vicar, Diocese of Grand Rapids, responds: “The problematic question with attempting to program an ethical response for machines — and self-driving cars will always be a machine, even if artificial intelligence is employed — is whose ethical structure will be used to determine what morality the machine will have?

“I do not believe self-driving cars ought to be introduced to the traveling public unless it is on a closed system of transportation. I do not see the possibility of an objective ethical system of programming to be used in the operation of these types of cars. I would not ride in such a vehicle.”

Father Michael Nasser, who writes from an Eastern Christian perspective and is pastor of St. Nicholas Orthodox Christian Church, responds: “While I will leave it to others to address this specific case, it highlights the new situation we find ourselves in. As more and more machines are programmed to ‘think,’ we begin to see the deficiency in thought that is devoid of the moral reasoning that only humans are capable of. Ironically, it is the transfer of more thinking from humans to machines that may — hopefully — bring a renewal in the awareness of the need for moral and ethical thinking by the programmers, and not just the programmed.”

My response:

First of all, by removing the distractions inherent to a human brain and body, a well-programmed self-driving car will eliminate a large percentage of accidents. To the point of the question, if you think that human beings are able to make a well-reasoned moral choice in the split second available as an accident begins to occur, you are fooling yourself. And if you argue that self-driving cars should not make choices, you are doubly fooling yourself. Not making a choice is itself a choice. Any reasoning built into an artificial intelligence engine will be no worse than a human reaction, and has great potential to be better.
