MORAL MACHINES

BENJAMIN K. BERGEN

Associate professor, cognitive science, UC San Diego; author, Louder Than Words: The New Science of How the Mind Makes Meaning

Machines make decisions for us. A trading machine in Manhattan detects a change in stock prices and decides in microseconds to buy millions of shares of a tech company. A driving machine in California detects a pedestrian and decides to turn the wheels to the left.

Whether these machines are “thinking” or not isn’t the issue. The real issue is the decisions we’re empowering them to make. More and more, these are consequential. People’s savings depend on them. So do their lives. And as machines begin to make decisions that are more consequential for humans, for animals, for the environment, and for national economies, the stakes get higher.

Consider this scenario: A self-driving car detects a pedestrian running out in front of it across a major road. It quickly apprehends that there’s no harm-free course of action. Remaining on course would cause a collision and inevitable harm to the pedestrian. Braking quickly would cause the car to be rear-ended, with the attendant damage and possible injuries. So would veering off-course. What protocol should a machine use to decide? How should it quantify and weigh different types of potential harm to different actors? How many injuries of what likelihood and severity are worth a fatality? How much property damage is worth a 20 percent chance of whiplash?
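To make concrete what "quantify and weigh" would even look like, here is a minimal sketch of one possible protocol: score each available maneuver by probability-weighted harm and pick the minimum. Every action, outcome, probability, and severity value below is invented for illustration; nothing here reflects how any real vehicle is programmed. The point of the sketch is that the numbers themselves, and the assumption that harms can be placed on a single scale at all, are exactly the moral choices the questions above are asking about.

```python
# Illustrative sketch only: an expected-harm comparison across maneuvers.
# All outcomes, probabilities, and severity scores are made up; choosing
# them is the moral question, not an engineering detail.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    probability: float  # chance this outcome occurs if the action is taken
    severity: float     # harm on an arbitrary 0-100 scale (100 = fatality)

# Hypothetical outcome models for the three maneuvers in the scenario.
ACTIONS = {
    "stay on course": [Outcome("pedestrian struck", 0.9, 100.0)],
    "brake hard":     [Outcome("rear-end collision, whiplash", 0.2, 10.0),
                       Outcome("property damage only", 0.8, 2.0)],
    "veer left":      [Outcome("collision in adjacent lane", 0.3, 40.0),
                       Outcome("leaves roadway, property damage", 0.7, 5.0)],
}

def expected_harm(outcomes):
    """Probability-weighted harm. Assumes all harms are commensurable on
    one scale -- itself a moral assumption, not a technical one."""
    return sum(o.probability * o.severity for o in outcomes)

if __name__ == "__main__":
    for action, outcomes in ACTIONS.items():
        print(f"{action:15s} expected harm = {expected_harm(outcomes):5.1f}")
    chosen = min(ACTIONS, key=lambda a: expected_harm(ACTIONS[a]))
    print("chosen maneuver:", chosen)
```

Notice that the arithmetic is trivial; what decides the outcome is the severity scale and the exchange rate it implies between injury, death, and property damage. That is the part no amount of data or computing power supplies.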

Questions like these are hard to answer. They’re questions you can’t solve with more data or more computing power. They’re about what’s morally right. We’re charging machines with moral decisions. Faced with a conundrum like this, we often turn to humans as a model. What would a person do? Let’s re-create that in the machine.

The problem is that when it comes to moral decisions, humans are consistently inconsistent. What people say they believe is right and what they actually do often don’t match (recall the case of Kitty Genovese). Moral calculus differs over time and from culture to culture. And the details of each scenario affect people’s decisions: Is the pedestrian a child or an adult? Does the pedestrian look intoxicated? Does he look like a fleeing criminal? Is the car behind me tailgating?

What’s the right thing for a machine to do?

What’s the right thing for a human to do?

Science is ill-equipped to answer moral questions. Yet the decisions we’ve already handed to machines guarantee that someone will have to answer them, and there may be a limited window left to ensure that that someone is human.