METATHINKING

HANS HALVORSON

Professor of philosophy, Princeton University


By any reasonable definition of thinking, I suspect that computers do indeed think. But if computers think, then thinking isn’t the unique province of human beings. Is there something else about humans that makes us unique?

Some people would say that what makes human beings unique is the fact that they partake in some sort of divine essence. That may be true, but it’s not terribly informative. If we met an intelligent alien species, how would we decide whether they also have this je ne sais quoi that makes a person? Can we say something more informative about the unique features of persons?

What sets human beings apart from the current generation of thinking machines is that humans can think about thinking, and can reject their current way of thinking if it isn’t working for them.

The most striking example of humans thinking about their own thinking was the discovery of logic by the Stoics and Aristotle. These Greek philosophers asked, “What are the rules we’re supposed to follow when we’re thinking well?” It’s no accident that twentieth-century developments in symbolic logic led to the invention of thinking machines—i.e., computers. Once we became aware of the rules of thinking, it was only a matter of time before we figured out how to make pieces of inanimate matter follow those rules.

Can we take those developments a step further? Can we construct machines that not only think but also engage in metathought—that is, thinking about thinking? One intriguing possibility is that for a machine to think about thinking, it will need to have something like free will. And another intriguing possibility is that we’re on the verge of constructing machines with free will—namely, quantum computers.

What exactly is involved in metathought? I’ll illustrate the idea from the point of view of symbolic logic. In symbolic logic, a “theory” consists of a language L and some rules R that stipulate which sentences can be deduced from which others. There are then two distinct activities you can engage in. You can reason “within the system”—writing proofs in the language L, using the rules R. (Existing computers do precisely this: They think within a system.) Or you can reason “about the system,” asking, for instance, whether there are enough rules to deduce all logical consequences of the theory. This latter activity is typically called metalogic and is a paradigm instance of metathought. It is thinking about the system as opposed to within the system.
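The within/about distinction can be made concrete with a toy deduction system. The sketch below is illustrative and not from the essay: it assumes a propositional language where implications are encoded as tuples `('->', p, q)` and the only rule R is modus ponens. Computing the deductive closure is reasoning *within* the system; asking whether a given sentence is derivable at all is a question *about* the system.

```python
def deductive_closure(axioms):
    """Apply modus ponens (from p and p -> q, infer q) until no new
    sentences appear. This is reasoning *within* the system: mechanically
    following the rules R over sentences of the language L."""
    derived = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(derived):
            # An implication whose antecedent is already derived
            # licenses deriving its consequent.
            if isinstance(s, tuple) and s[0] == '->' and s[1] in derived:
                if s[2] not in derived:
                    derived.add(s[2])
                    changed = True
    return derived

def is_derivable(axioms, sentence):
    """A (very simple) metalogical question *about* the system:
    does this sentence follow from the axioms under the rules?"""
    return sentence in deductive_closure(axioms)

theory = {'p', ('->', 'p', 'q'), ('->', 'q', 'r')}
print(is_derivable(theory, 'r'))  # True: p yields q, q yields r
print(is_derivable(theory, 's'))  # False: s is never derived
```

In this miniature setting the meta-question happens to be answerable by running the system itself, since the closure is finite; for richer theories, metalogical questions such as completeness or decidability cannot be settled that way, which is part of what makes metathought a genuinely different activity.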

But I’m interested in yet another instance of metathought: If you’ve adopted a theory, then you’ve adopted a language and some deduction rules. But you’re free to abandon that language or those rules if you think a different theory would suit your purposes better. We haven’t yet built a machine that can do this sort of thing—i.e., evaluate and choose among systems. Why not? Perhaps choosing among systems requires free will, emotions, goals, or other things not intrinsic to intelligence per se. Perhaps these further abilities are something we don’t have the power to confer on inanimate matter.