THEY’LL DO MORE GOOD THAN HARM

MARK PAGEL

Professor of evolutionary biology, University of Reading, U.K.; external professor, science board, Santa Fe Institute; author, Wired for Culture: Origins of the Human Social Mind

There’s no reason to believe that as machines become more intelligent—and intelligence such as ours is still little more than a pipe dream—they’ll become evil, manipulative, self-interested, or in general a threat to humans. Self-interest is a property of things that “want” to stay alive (or more accurately, that want to reproduce), and this isn’t a natural property of machines. Computers don’t mind, much less worry about, being switched off.

So full-blown artificial intelligence will not spell “the end of the human race.” It’s not an “existential threat” to humans (digression: this now-common use of existential is incorrect). We’re not approaching some ill-defined apocalyptic Singularity, and the development of AI will not be “the last great event in human history”—all claims that have recently been made about machines that can think.

In fact, as we design machines that get better and better at thinking, they can be put to uses that will do us far more good than harm. Machines are good at long monotonous tasks like monitoring risks; they’re good at assembling information to reach decisions; they’re good at analyzing data for patterns and trends; they can arrange for us to use scarce or polluting resources more efficiently; they react faster than humans; they’re good at operating other machines; they don’t get tired or afraid; and they can even look after their human owners, as smartphone assistants like Siri and Cortana, and the GPS route planners in most cars, already do.

Being inherently selfless rather than self-interested, machines can easily be taught to cooperate, and without fear that some of them will take advantage of other machines’ goodwill. Groups (packs, teams, bands, or whatever collective noun will eventually emerge—I prefer the ironic jams) of networked and cooperating driverless cars will drive safely nose-to-tail at high speeds: They won’t nod off, they won’t get angry, they can inform one another of their actions and conditions elsewhere, and they’ll make better use of the motorways, which now are mostly unoccupied space (owing to humans’ unremarkable reaction times). They’ll do this happily, and without expecting reward, while we eat our lunch, watch a film, or read the newspaper. Our children will rightly wonder why anyone ever drove a car.
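To make the platooning picture concrete, here is a minimal simulation sketch in Python. It is my illustration rather than anything from the essay, and every number in it (the two-metre gap, the speeds, the feedback gains) is invented for the demo. Each following car instantly receives the acceleration the car ahead broadcasts and adds a small correction toward a fixed gap.

```python
DT = 0.1                  # simulation step, seconds
GAP = 2.0                 # target bumper-to-bumper gap, metres
K_GAP, K_VEL = 2.0, 4.0   # feedback gains, hand-tuned for this toy

class Car:
    def __init__(self, position, speed):
        self.x, self.v, self.a = position, speed, 0.0

    def step(self):
        self.v += self.a * DT
        self.x += self.v * DT

# Five cars starting 10 m apart, cruising at 30 m/s (108 km/h).
cars = [Car(position=-10.0 * i, speed=30.0) for i in range(5)]

for t in range(600):  # 60 simulated seconds
    # The leader speeds up for 10 s, then brakes back to cruising speed.
    cars[0].a = 1.5 if t < 100 else (-1.5 if t < 200 else 0.0)
    for lead, follow in zip(cars, cars[1:]):
        gap_error = lead.x - follow.x - GAP
        speed_error = lead.v - follow.v
        # Feed-forward: the follower copies the acceleration its leader
        # broadcasts over the network, then corrects toward the target gap.
        follow.a = lead.a + K_GAP * gap_error + K_VEL * speed_error
    for car in cars:
        car.step()

print("final gaps (m):",
      [round(lead.x - follow.x, 2) for lead, follow in zip(cars, cars[1:])])
```

The feed-forward term is the paragraph’s point in miniature: because each follower learns of the leader’s braking over the network rather than by watching brake lights, the safe gap is set by control error, not by human reaction time.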

There’s a risk that we will, and perhaps already have, become dangerously dependent on machines, but this says more about us than about them. Equally, machines can be made to do harm, but again, this says more about their human inventors and masters than about the machines. Along those lines, there’s one strand of human influence on machines that we should monitor closely: introducing the possibility of death. If machines have to compete for resources (like electricity or gasoline) to survive, and they have some ability to alter their own behaviors, they could become self-interested.
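The evolutionary logic of that last step can be shown with a toy simulation, again my own illustration with invented numbers rather than anything the essay specifies: agents carry a single heritable “greed” trait, compete for a scarce energy pool, are removed if they capture too little, and reproduce with small mutations.

```python
import random

random.seed(0)

POPULATION = 100     # number of machine agents
ENERGY_POOL = 60.0   # shared energy per round; scarce relative to demand
SURVIVAL_COST = 0.5  # energy an agent must capture to persist
ROUNDS = 200

# Each agent is reduced to one heritable number: its "greed", i.e. how
# aggressively it claims a share of the pool.
agents = [random.uniform(0.1, 1.0) for _ in range(POPULATION)]

for _ in range(ROUNDS):
    total = sum(agents)
    # Energy is split in proportion to greed; the timid starve.
    survivors = [g for g in agents
                 if ENERGY_POOL * g / total >= SURVIVAL_COST]
    # Survivors repopulate with small mutations -- ordinary selection,
    # with no explicit reward for selfishness anywhere in the code.
    agents = [max(0.01, random.choice(survivors) + random.gauss(0, 0.05))
              for _ in range(POPULATION)]

print("mean greed after", ROUNDS, "rounds:",
      round(sum(agents) / len(agents), 2))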

Were we to allow or even encourage self-interest to emerge in machines, they could eventually become like us: capable of repressive or, worse, unspeakable acts toward humans and toward one another. But this wouldn’t happen overnight; it’s something we’d have to set in motion. It has nothing to do with intelligence (some viruses do unspeakable things to humans) and, again, says more about what we do with machines than about the machines themselves.

So it’s not thinking machines or AI per se that we should worry about, but people. Machines that can think are neither for us nor against us and have no built-in predilections to be one over the other. To think otherwise is to confuse intelligence with aspiration and its attendant emotions. We have both, because we’re evolved and replicating (reproducing) organisms, selected to stay alive in often cutthroat competition with others. But aspiration isn’t a necessary part of intelligence, even if it provides a useful platform on which intelligence can evolve.

Indeed, we should look forward to the day when machines can transcend mere problem solving and become imaginative and innovative—still a long, long way off but surely a feature of true intelligence—because this is something humans aren’t very good at but will probably need more of in the coming decades than at any time in our history.