NO “I” AND NO CAPACITY FOR MALICE

ROY BAUMEISTER

Francis Eppes Eminent Scholar and head, social psychology graduate program, Florida State University; coauthor (with John Tierney), Willpower: Rediscovering the Greatest Human Strength

So-called thinking machines are extensions of the human mind. They don’t exist in nature. They’re not created by evolution, they’re created by human minds from blueprints and theories. The human mind figures out how to make tools that enable it to work better. A computer is one of the best tools.

Life mostly seeks to sustain life, so living things care about what happens. The computer, not alive and not designed by evolution, doesn’t care about survival or reproduction—in fact, it doesn’t care about anything. Computers aren’t dangerous in the way snakes and hired killers are dangerous. Although many movies explore horror fantasies of computers turning malicious, real computers lack the capacity for malice.

A thinking machine that serves a human is an asset, not a threat. Only if it became an independent agent, acting on its own—a tool rebelling against its user’s wishes—could it become a threat. For that, a computer would need to do more than think. It would need to make choices that could violate the programmer’s wishes. That would require something akin to free will.

What would the computer on your desk or lap have to do so that you’d say it had free will (at least in whatever sense humans have free will)? Certainly it would have to be able to reprogram itself; otherwise it would just be carrying out built-in instructions. And the reprogramming would have to be done in a way that was flexible, not programmed in advance. But where would that come from? In humans, the agent comes to exist because it serves the motivational system: It helps you get what you need and want.

Humans, like other animals, were designed by evolution, and so the beginnings of subjectivity come with wanting and liking the things that enable life to continue, like food and sex. The agent serves that, choosing actions that obtain those life-sustaining things. And thinking helps the agent make better choices.

Human thinking thus serves to prolong life—by helping you decide whom to trust, what to eat, how to make a living, whom to marry. A thinking machine isn’t motivated by any innate drive to sustain its life. The computer may be able to process more information faster than a human brain can, but there’s no “I” in the computer, because it doesn’t begin with wanting things that enable it to sustain life. If computers had an urge to prolong their own existence, they’d probably focus their ire mainly on the computer industry so as to stop progress, because the main threat to a computer’s continued existence arises when newer, better computers make it obsolete.