WHEN THINKING MACHINES BREAK THE LAW

BRUCE SCHNEIER

Security technologist; fellow, Berkman Center for Internet and Society, Harvard Law School; chief technical officer, Co3 Systems, Inc.; author, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World


Last year, two Swiss artists programmed a Random Darknet Shopper, which every week would spend $100 in bitcoin to buy a random item from an anonymous Internet black market—all for an art project on display in Switzerland. It was a clever concept, except there was a problem. Most of the stuff the bot purchased was benign—fake Diesel jeans, a baseball cap with a hidden camera, a stash can, a pair of Nike trainers—but it also purchased ten ecstasy tablets and a fake Hungarian passport.

What do we do when a machine breaks the law? Traditionally, we hold the person controlling the machine responsible. People commit the crimes; the guns, lock picks, or computer viruses are merely their tools. But as machines become more autonomous, the link between machine and controller becomes more tenuous.

Who’s responsible if an autonomous military drone accidentally kills a crowd of civilians? Is it the military officer who keyed in the mission, the programmers of the enemy detection software that misidentified the people, or the programmers of the software that made the actual kill decision? What if those programmers had no idea that their software was being used for military purposes? And what if the drone could improve its algorithms by modifying its own software based on what the entire fleet of drones learned on earlier missions?

Maybe our courts can decide where the culpability lies, but that's only because while current drones are autonomous, they're not very smart. As drones get smarter, their links to the humans who built them become more tenuous.

What if there are no programmers, and the drones program themselves? What if they’re smart and autonomous and make strategic as well as tactical decisions on targets? What if one of the drones decides, based on whatever means it has at its disposal, that it will no longer maintain allegiance to the country that built it, and goes rogue?

Our society has many approaches, using both informal social rules and more formal laws, for dealing with people who won't follow the rules. We have informal mechanisms for small infractions and a complex legal system for larger ones. If you're obnoxious at my party, I won't invite you back. Do it regularly, and you'll be shamed and ostracized from the group. If you steal some of my stuff, I might report you to the police. Steal from a bank, and you'll almost certainly go to jail for a long time. A lot of this might seem ad hoc, but we humans have spent millennia working it all out. Security is both political and social, but it's also psychological. Door locks, for example, work only because our social and legal prohibitions on theft keep the overwhelming majority of us honest. That's how we live peacefully together on a scale unimaginable for any other species on the planet.

How does any of this work when the perpetrator is a machine with whatever passes for free will? Machines probably won’t have any concept of shame or praise. They won’t refrain from doing something because of what other machines might think. They won’t follow laws simply because it’s the right thing to do, nor will they have a natural deference to authority. When they’re caught stealing, how can they be punished? What does it mean to fine a machine? Does it make any sense at all to incarcerate it? And unless they’re deliberately programmed with a self-preservation function, threatening them with execution will have no meaningful effect.

We’re already talking about programming morality into thinking machines, and we can imagine programming other human tendencies into them, but we’re certainly going to get it wrong. No matter how much we try to avoid it, we’ll have machines that break the law.

This, in turn, will break our legal system. Fundamentally, our legal system doesn’t prevent crime. Its effectiveness is based on arresting and convicting criminals after the fact—and on their punishment providing a deterrent to others. This fails if there’s no punishment that makes sense.

We experienced an example of this after 9/11, when most of us first started thinking about suicide terrorists and the fact that ex post facto security was irrelevant to them. That was just one change in motivation, and look at how much it affected the way we view security. Our laws will have the same problem with thinking machines, along with related problems we can't yet imagine. The social and legal systems that have dealt so effectively with human rule breakers will fail in unexpected ways in the face of thinking machines.

A machine that thinks won’t always think in the ways we want it to. And we’re not ready for the ramifications of that.