WILL THEY THINK ABOUT THEMSELVES?

JESSICA L. TRACY

Associate professor of psychology, University of British Columbia

KRISTIN LAURIN

Assistant professor of organizational behavior, Stanford Graduate School of Business

The first question that arises as we think about machines that think is how much those machines will, eventually, be like us. This comes down to a question of self. Will thinking machines ever evolve to the point of having a sense of self resembling that of humans? We’re (probably) the only species capable of self-consciously thinking about who we are—of not only knowing our selves but also being able to evaluate those selves from a unique internal perspective.

Could machines ever develop that kind of self? Might they experience the same evolutionary forces that made human selves adaptive? These include the need to get along with others, attain status, and make sure others like us and want to include us in their social groups. As a human being, if you want to succeed at group living, it helps to have a self you’re motivated to protect and enhance; this is what prompts you to become the kind of person others like, respect, and grant power to, all of which ultimately enhances your chances of surviving long enough to reproduce. Your self is also what allows you to understand that others have selves of their own—a recognition required for empathy and cooperation, two prerequisites for social living.

Will machines ever experience those kinds of evolutionary forces? Let’s start with the assumption that machines will someday control their own access to resources they need, like electricity and Internet bandwidth (rather than having this access controlled by humans) and will be responsible for their own “life” and “death” outcomes (rather than having these outcomes controlled by humans). From there, we can next assume that the machines that survive in this environment will be those programmed to hold at least one basic self-related goal: that of increasing their own efficiency or productivity. This goal would be akin to the human gene’s goal of reproducing itself; in both cases, the goal drives behaviors oriented toward boosting fitness—either of the individual possessing the gene or the machine running the program.

Under those circumstances, machines would be motivated to compete with one another for a limited pool of resources. Those who can form alliances and cooperate—that is, sacrifice their own goals for others, in exchange for future benefits—will be most successful in this competition. So it’s possible to imagine a future in which it would be adaptive for machines to become social beings that need to form relationships with other machines and therefore develop humanlike selves.

However, there’s a major caveat to this assumption. Any sociality that comes to exist among thinking machines would be qualitatively different from that of humans, for one critical reason: Machines can literally read one another’s minds. Unlike humans, they don’t need the secondary—and often deeply flawed—interpretative form of empathy we rely on. They can directly know the contents of one another’s minds. This would make getting along with others a notably different process. Despite the critical importance of our many social connections, in the end we humans are each fundamentally alone. Any connection we feel with another’s mind is metaphorical; we cannot know for certain what goes on in someone else’s head—at least not in the same way we know our own thoughts. This constraint doesn’t exist for machines. Computers can directly access one another’s inner “thoughts,” and there’s no reason that one machine reading another’s hardware and software wouldn’t come to know, in exactly the self-knowing sense, what it means to be that other machine. Once that happens, each machine is no longer an entirely separate self in the human sense. At that point—when machines share minds—any self they have would necessarily become collective.

Yes, machines could easily keep track of the sources of various bits of information they obtain and use this tracking to distinguish between “me” and other machines. But once an individual understands another at the level that a program-reading machine can, the distinction between self and other becomes largely irrelevant. If I download all the contents of your PC to an external hard drive and plug that into my PC, don’t those contents become part of my PC’s self? If I establish a permanent connection between our two PCs, such that all information on one is shared with the other, do they continue to be two separate PCs? Or are they at that point a single machine? Humans can never obtain the contents of another’s mind in this way; despite our best efforts to become close to certain others, there’s always a skull-thick boundary separating their minds from ours. But for machines, self-expansion is not only possible but may be the most likely outcome of a programmed goal to increase fitness in a world where groups of individuals must compete over or share resources.

To the extent that machines come to have selves, they’ll be so collective that they may instigate a new level of sociality not experienced by humans—perhaps more like the eusociality of ants, whose extreme genetic relatedness makes sacrificing oneself for a family member adaptive. Nonetheless, the fact that any self at all is a possibility in machines is a reason to hope. The self is what allows us to feel empathy, so in machines it could be the thing that forces them to care about us. Self-awareness might motivate machines to protect (or at least not harm) a species that, despite being several orders of magnitude less intelligent than they are, shares the thing that makes them care about who they are.

Of course, it’s questionable whether we can hold out greater hope for the empathy of supersmart machines than what we currently see in many humans.