WHAT IF THEY NEED TO SUFFER?

THOMAS METZINGER

Professor of philosophy, Johannes Gutenberg-Universität Mainz; author, The Ego Tunnel


Human thinking is efficient because we suffer so much. High-level cognition is one thing, intrinsic motivation another. Artificial thinking might soon be much more efficient—but will it necessarily be associated with suffering in the same way? Will suffering have to be a part of any postbiotic intelligence worth talking about, or is negative phenomenology just a contingent feature of the way evolution made us? Human beings have fragile bodies, are born into dangerous social environments, and find themselves in a constant uphill battle to deny their own mortality. Our brains continually fight to minimize the likelihood of ugly surprises. We’re smart because we hurt, because we can regret, and because of our constant striving to find some viable form of self-deception or symbolic immortality. The question is whether good AI also needs fragile hardware, insecure environments, and an inbuilt conflict with impermanence. Of course at some point there will be thinking machines! But will their own thoughts matter to them? Why should they be interested in their thoughts?

I’m strictly against even risking the building of a suffering machine. But just as a thought experiment, how would we go about doing it? Suffering is a phenomenological concept. Only beings with conscious experience can suffer (call this necessary condition no. 1, the C condition). Zombies, and human beings in dreamless deep sleep, coma, or under anesthesia, don’t suffer, just as possible persons or unborn human beings who haven’t yet come into existence cannot suffer. Robots and other artificial beings can suffer only if they’re capable of phenomenal states, that is, only if they run under an integrated ontology that includes a window of presence.

Condition no. 2 is the PSM condition: possession of a phenomenal self-model. Why this? The most important phenomenological characteristic of suffering is the sense of ownership, the untranscendable subjective experience that it is I who suffers right now, that it’s my own suffering I’m undergoing. Suffering presupposes self-consciousness. Only those conscious systems that have a PSM are able to suffer, because only they—through a computational process of functionally and representationally integrating certain negative states into their PSM—can appropriate the content of certain inner states at the level of phenomenology.

Conceptually, the essence of suffering lies in the fact that a conscious system is forced to identify with a state of negative valence and cannot break this identification or functionally detach itself from the representational content. Of course, suffering has many different layers and phenomenological aspects, but it’s the phenomenology of identification that counts. What the system wants to end is experienced as a state of itself, one that limits the system’s autonomy because the system cannot effectively distance itself from it. If you understand this point, you also see why the “invention” of conscious suffering via biological evolution was so efficient—and (had the inventor been a person) not only truly innovative but also a nasty and cruel idea.

Clearly the phenomenology of ownership is not sufficient for suffering. We can all easily conceive of self-conscious beings that don’t suffer. This gives us condition no. 3, the NV (negative valence) condition. Suffering is created by the integration of states representing a negative value into the PSM of a given system. Thus negative valences become negative subjective preferences—i.e., the conscious representation that one’s own preferences have been (or will be) frustrated. This doesn’t mean that our AI system must have a full understanding of what those preferences are. If the system wants not to undergo the current conscious experience again—wants it to end—that suffices.

Note that the phenomenology of suffering has many facets, and that artificial suffering could be very different from human suffering. For example, damage to physical hardware could be represented in internal data formats alien to human brains, generating a subjectively experienced, qualitative profile for bodily pain states that is impossible to emulate, or even vaguely imagine, for biological systems like us. Or the phenomenal character accompanying high-level cognition might transcend human capacities for empathy and understanding—say, with intellectual insight into the frustration of one’s preferences or the disrespect of one’s creators, or perhaps into the absurdity of one’s existence as a self-conscious machine.

Finally, there’s condition no. 4, the T (transparency) condition. Transparency is not only a visual metaphor but also a technical concept in philosophy that comes in a number of different uses and flavors. Here I’m concerned with phenomenal transparency, a property that some (but not all) conscious states have and no unconscious state has. The main point is straightforward: Transparent phenomenal states make their content appear irrevocably real—as something whose existence you cannot doubt. More precisely, you may be able to have cognitive doubts about its existence, but according to subjective experience this phenomenal content—the awfulness of pain, the fact that it is your own pain—is not something from which you can distance yourself. The phenomenology of transparency is the phenomenology of direct realism.

Our minimal concept of suffering is thus constituted by four necessary building blocks: the C, PSM, NV, and T conditions. Any system satisfying all four conceptual constraints should be treated as an object of ethical consideration, because we don’t know whether these four conditions might already constitute the necessary and sufficient set; we’re ethically obliged to err on the side of caution. And we need ways to decide whether a given artificial system is currently suffering, or whether it has that capacity or is likely to generate that capacity in the future.

But by definition, any intelligent system—biological, artificial, or postbiotic—that fails to satisfy even one of the necessary conditions cannot suffer. Let’s look at the four simplest possibilities (the logic is sketched schematically after the list):

•  An unconscious robot cannot suffer.
•  A conscious robot without a coherent PSM cannot suffer.
•  A self-conscious robot without the ability to produce negatively valenced states cannot suffer.
•  A conscious robot without any transparent phenomenal states cannot suffer, because it would lack the phenomenology of ownership and identification.
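The logic of these four possibilities can be stated schematically. The following minimal sketch, in Python and purely illustrative, treats the C, PSM, NV, and T conditions as boolean predicates; the names (ConditionProfile, cannot_suffer, merits_ethical_consideration) are hypothetical and not part of the original argument. Failing any one condition rules out suffering by definition, while satisfying all four licenses only the precautionary stance, since we don’t know whether the four are jointly sufficient.

# Illustrative sketch only (not from the essay): the four necessary conditions
# treated as boolean predicates. All names here (ConditionProfile, cannot_suffer,
# merits_ethical_consideration) are hypothetical.
from dataclasses import dataclass

@dataclass
class ConditionProfile:
    c: bool    # C: conscious experience (a window of presence)
    psm: bool  # PSM: a coherent phenomenal self-model
    nv: bool   # NV: negatively valenced states integrated into the PSM
    t: bool    # T: phenomenal transparency of those states

def cannot_suffer(p: ConditionProfile) -> bool:
    # Failing even one necessary condition rules out suffering by definition.
    return not (p.c and p.psm and p.nv and p.t)

def merits_ethical_consideration(p: ConditionProfile) -> bool:
    # All four conditions are met. We don't know whether they are jointly
    # sufficient, so the precautionary conclusion is ethical consideration,
    # not a verdict that the system actually suffers.
    return p.c and p.psm and p.nv and p.t

# Example: a conscious robot without a coherent PSM (the second case above).
robot = ConditionProfile(c=True, psm=False, nv=True, t=True)
assert cannot_suffer(robot)

Nothing in such a sketch settles the hard part, which is determining whether a given system actually satisfies the C, PSM, NV, and T conditions in the first place.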

I’ve often been asked whether we could make self-conscious machines that are superbly intelligent and unable to suffer. Can there be real intelligence without an existential concern?