Human moral intuitions cannot reliably guide ethical decisions of autonomous machines

Machine ethics is “a fast-growing field in which researchers develop appropriate ethical models for autonomous machines,” according to Tae Wan Kim, assistant professor of business ethics. This field of study is becoming increasingly important as artificial intelligence (AI) continues to find new applications in business and retail environments.

In a paper titled “Toward Non-Intuition-Based Machine Ethics,” Kim and John Hooker, T. Jerome Holleran Professor of Business Ethics and Social Responsibility and professor of operations research, argue that the dominant “intuition-based” model of machine ethics, in which AI training data is drawn from ethical decisions made by humans, is a poor standard for decision-making by autonomous machines. The paper points to evidence that ethicists’ decisions are susceptible to situational cues and do not differ significantly from those of ordinary people.

“A prime motivation for developing autonomous vehicles is that human error is a leading cause of accidents, and autonomous vehicles minimize human involvement,” Kim said. “Thus if human moral intuitions are not reliable, machine ethics should be developed so as to avoid human moral intuition.”

Instead, Kim said, a deontological, or principles-based, approach would remove the human biases inherent in intuition-based models. Moreover, the resulting processes and outcomes can be expressed rigorously in a computational language, making them more consistent and transparent. “Once humans identify the logic of action plans, the machine can perform rigorous ethical reasoning,” Kim said.
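As a rough illustration of what a principles-based check could look like in code, the sketch below encodes ethical constraints as explicit, inspectable rules rather than patterns learned from human examples. It is not the authors’ implementation; the action plan, the rules, and all names in it are hypothetical.

```python
# Hypothetical sketch of a principles-based (deontological) check.
# The action plan and the rules below are illustrative only, not from the paper.

from dataclasses import dataclass


@dataclass
class ActionPlan:
    description: str
    reasons: set[str]        # the reasons the agent acts on
    affects_others: bool     # whether the plan constrains other agents
    others_consent: bool     # whether the affected agents could consent


def respects_autonomy(plan: ActionPlan) -> bool:
    """A plan that constrains others without their possible consent fails."""
    return not plan.affects_others or plan.others_consent


def is_generalizable(plan: ActionPlan, consistent_if_universal) -> bool:
    """The reasons must still justify the plan if every agent with the same
    reasons acted on it; `consistent_if_universal` encodes that test."""
    return consistent_if_universal(plan.reasons)


def ethical(plan: ActionPlan, consistent_if_universal) -> bool:
    # Every principle must hold, and the decision trace is fully inspectable.
    return respects_autonomy(plan) and is_generalizable(plan, consistent_if_universal)


# Example: a vehicle plan that blocks others without consent is rejected.
plan = ActionPlan(
    description="block intersection to save ten seconds",
    reasons={"minimize own travel time"},
    affects_others=True,
    others_consent=False,
)
print(ethical(plan, consistent_if_universal=lambda reasons: False))  # False
```

Because every rule is stated explicitly, a rejected plan can be traced back to the specific principle it violated, which is the consistency and transparency Kim describes.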

The model that Kim and Hooker present in the paper uses quantified modal logic to state ethical principles precisely: whether an action is rationally generalizable, whether it violates joint autonomy, and whether it is sufficiently utilitarian. “By doing so, we clarified the aspects of the principles that were traditionally regarded as ambiguous,” Kim said.
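To give a flavor of the notation, a generalization-style condition can be sketched in quantified modal logic roughly as follows. This is an illustrative paraphrase, not the paper’s exact formalization, and all symbols are assumptions introduced here.

```latex
% Illustrative sketch only, not the paper's formalization.
% C(x): agent x is in the same circumstances / has the same reasons,
% A(x): agent x performs the action,
% P_A : the action's purpose is still achieved,
% \Diamond: "it is rationally possible to believe that".
\[
  \mathrm{Gen}(A, C) \;\equiv\;
  \Diamond\Big( \forall x\,\big(C(x) \rightarrow A(x)\big) \;\wedge\; P_A \Big)
\]
% Read: the action is generalizable only if the agent can rationally believe
% its purpose would still be achieved were every agent with the same reasons
% to perform it.
```

Stating the principles at this level of precision is what allows a machine to check them mechanically, rather than approximating them from examples of human behavior.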