Computer says no: abrogating our ethical responsibility to machines

 January 7th, 2021

Vivek Nallur is an assistant professor at University College Dublin’s School of Computer Science. His research interests include how to implement and verify ethics in autonomous machines.

Imagine you spot a runaway tram that will mow down five people - unless you pull a lever that will divert it to kill just one person. What should you do? This infamous “trolley problem” is a moral conundrum common in philosophy and ethics 101 classes. But like many such prickly questions, it has no right answer, and the hypothetical variations are never-ending. What if you divert the tram and the one person it kills is pregnant with quadruplets?

Questions like this - constructed to be unsolvable - illustrate the problem with designing and implementing ethics in machines. After all, if human beings can’t agree on what’s right and wrong, how can we expect computers to figure it out?

And yet we do. 

“Another question people came up with is whether an autonomous car should hit a pregnant woman or an old man if those are the only choices. I wouldn’t trust its judgment on that,” admits Vivek. “But I would trust it to drive inside the lane, I would trust it to stop at a stoplight, I would trust it to never break a red light. So the envelope of things that I trust the vehicle with keeps expanding. And eventually that might form 99.9% of my lived experience.”

Lived experience is what really counts when it comes to determining ethics, in his opinion. We may think we subscribe to a particular school of philosophy, but when our own lives or interests are on the line, we may react differently. Our emotional and logical responses may not coincide.

“I'm not saying that this is a flaw in human beings; it is just how nature designed us. Our rational mind works in one way and our instinctive reactions are sometimes completely irrational.”

Can occasionally irrational humans implement and verify ethics in machines?

“I’m hopeful. That’s why I’m trying to do it. I’ll be honest and say I don’t know how we will do it. I don’t know if we will ever be able to have a completely ethical machine. But we might end up with a satisfactory one.”

AI used to have a problem with a lack of data - not anymore, thanks to smartphones. The problem now is that “notions of what is acceptable change drastically, and sometimes in a very short period of time.”

When what’s right and wrong is subjective - or simply uncomfortable - sometimes it suits us to abrogate ethical responsibility to machines. 

“It is ‘the computer says no’ idea,” explains Vivek. “‘I would have given you the loan because that is the correct thing to do. But my hands are tied. The computer has decided.’”

Too much responsibility for machine ethics is also delegated to human programmers. Much has been written about the often unintentional bias in algorithms, which tends to favour the mostly white American men who write them. But how are they really supposed to speak, imagine and code for all of us?

“For me, the problem is that we are expecting computer scientists to answer the questions that society should be answering. I think that is unfair.”

Vivek says we need philosophers, sociologists, lawyers and a wide variety of minds to ponder the big - and little - questions that have no “right” answers.  

Laws, by their nature, serve the majority rather than individuals, but “the problem is that when this happens with machines, it happens at scale. 100 people affected is not the same as 100,000 people or 100 million people. If the next big thing that comes along serves five billion people correctly and yet leaves out 100 million people, is that okay? That question has to be answered by us as a society, not by computer scientists. It is not an AI problem, it is a democracy problem. It is a social problem,” Vivek emphasises. “You need to tell me what the AI should do.”

Programming a game, for instance, is easy because “I know what winning and losing looks like. But with society, I don't.”
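His comparison can be made concrete in code. Below is a minimal sketch (the function names and the game are hypothetical, chosen only for illustration): a game objective like “three in a row” can be written down and checked exhaustively, whereas a “societal” objective has no agreed definition a programmer could implement.

```python
# Hypothetical illustration: a game's win condition is fully specifiable,
# a societal objective is not.

def tic_tac_toe_winner(board):
    """Return 'X' or 'O' if a player has three in a row on a 3x3 board, else None.

    The win condition is unambiguous: it can be checked line by line.
    """
    lines = [
        [(0, 0), (0, 1), (0, 2)], [(1, 0), (1, 1), (1, 2)], [(2, 0), (2, 1), (2, 2)],  # rows
        [(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)], [(0, 2), (1, 2), (2, 2)],  # columns
        [(0, 0), (1, 1), (2, 2)], [(0, 2), (1, 1), (2, 0)],                            # diagonals
    ]
    for line in lines:
        values = {board[r][c] for r, c in line}
        if len(values) == 1 and values != {" "}:
            return values.pop()
    return None


def society_is_better_off(before, after):
    """No agreed, checkable definition exists to implement here."""
    raise NotImplementedError("Society, not computer scientists, must define this objective.")


if __name__ == "__main__":
    board = [["X", "O", "O"],
             [" ", "X", " "],
             ["O", " ", "X"]]
    print(tic_tac_toe_winner(board))  # prints "X": the winning test is fully specified
```

The contrast is the point: the first function can be verified against a complete specification, while the second cannot even be specified until society decides what “better off” means.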

Computers have been gaining our trust by osmosis since they were first invented. Trust, as Vivek notes, is earned. Many of us are happy to travel in planes that fly on autopilot most of the time. We don’t tend to second-guess our satellite navigation either. 

“The more we use something, the more we understand the things we can trust it for. So we do trust our tools. The only question is how much?” 

This may be where interdisciplinary research - and the input of governments and citizens - is needed to oversee ethical principles for AI. The more diverse the voices involved, the more likely we are to arrive at a moral code that spans the spectrum from broad principles to nuanced cases.

Though perhaps machines will beat us to the punch. 

“A philosopher said wouldn’t it be interesting if AI came up with an ethical position that human beings had never thought of before,” says Vivek. “And how could we refute that?”

 

This article was brought to you by UCD Institute for Discovery - fuelling interdisciplinary collaborations.