THINKING FROM THE INSIDE OR THE OUTSIDE?

MATTHEW D. LIEBERMAN

Professor of psychology, psychiatry, and biobehavioral sciences, UCLA; author, Social: Why Our Brains Are Wired to Connect


Will machines someday be able to think? And if so, should we worry about Schwarzenegger-looking machines with designs on eliminating humans from the planet because their superior decision making would make this an obvious plan of action? As much as I love science fiction, I can’t say I’m too worried about a robot apocalypse. I’ve occasionally worried about what it means to say that a machine can think. I’d either say that we’ve been building thinking machines for centuries or I’d argue that it’s a dubious proposition, unlikely to ever come true. What it comes down to is whether we define thinking from a third-person or a first-person perspective. Is thinking something we can identify as occurring in systems like people or machines but not in ham sandwiches, from the outside, based on their behavior, or is thinking the kind of thing we know about from the inside, because we know what thinking feels like?

The standard definition of thinking implies that it occurs if informational inputs are processed, transformed, or integrated into some type of useful output. Solving math equations is one of the simplest, most straightforward kinds of thinking. If you see three of something and then four more of that something and you conclude that there are seven of those things overall, you’ve done a bit of mathematical thinking. So did Pascal’s first mechanical calculator in 1642. Those calculators needed human input to get the three and the four but then could do the integration of both those numbers to yield seven. Today we could cut out the middleman by building a computer that has visual sensors and object-recognition software that could easily detect the three things and the four things and complete the addition on its own.
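The input-output picture of thinking described above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the essay: the function names and the toy "scene" representation are my own, standing in for the sensors and object-recognition software the paragraph imagines.

```python
# A minimal sketch of "third-person thinking": a system that transforms
# informational inputs into a useful output, in the spirit of Pascal's
# mechanical calculator. All names here are illustrative.

def count_objects(scene):
    """Stand-in for object recognition: count the detected items in a scene."""
    return len(scene)

def integrate(a, b):
    """The integration step: combine two counts into a total."""
    return a + b

# Three of something, then four more of that something...
first_group = ["thing"] * 3
second_group = ["thing"] * 4

total = integrate(count_objects(first_group), count_objects(second_group))
print(total)  # 7 -- an input-output transformation, with no experience of it
```

By the essay's third-person definition, this trivial program "thinks": it takes inputs we can describe as information and transforms them into a useful output, just as a kidney or a calculator does.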

Is that a thinking machine? If so, then you’d probably have to admit that most of your internal organs are also thinking. Your kidneys, spleen, and intestines all take inputs that could be called information and transform these inputs into outputs. Even your brain, as seen from a third-person perspective, doesn’t deal with information, strictly speaking. Its currency is electrical and chemical transmissions that neuroscientists work hard to redescribe in terms of informational value. If pattern X of electrical and chemical activity occurs as a distributed pattern in the brain when we think of “three,” is that pattern the same as three in any intrinsic sense? It’s just a convenient equivalence we scientists use. Electrical impulses in the brain are no more intrinsically “information” or “thinking” than what goes on in our kidneys, calculators, or any of the countless other physical systems that convert inputs to outputs. We can call this “thinking” if we like, but if so, it’s third-person thinking—thinking that can be identified from the outside—and it’s far more common than we’d like to admit. Certainly the character of human or computer information transformation may be more sophisticated than other naturally occurring forms of thinking, but I’m unconvinced, from a third-person perspective, that they’re qualitatively different.

So do humans think only in the most trivial sense? From a third-person perspective, I’d say yes. From a first-person perspective, the story has a different punch line. Around the time Pascal was creating man-made thinking machines, Descartes wrote those famous words, cogito ergo sum (which, by the way, were cribbed from St. Augustine’s writings 1,000 years earlier). I don’t believe Descartes had it quite right, but with a slight modification we can make his philosophical bumper sticker into something both true and relevant to this debate about thinking machines.

While “I think, therefore I am” might have a touch too much bravado, “I think, therefore there’s thinking” is entirely defensible. When I add three and four, I might have a conscious experience of doing so, and the way I characterize this conscious experience is as a moment of thinking, distinct from my experience of being lost in a movie or overcome by emotion. I have certain experiences that feel like thinking, and they tend to occur when I’m presented with a math problem or a logic puzzle or a choice of whether to take the one marshmallow or wait for two.

The feeling of thinking might seem inconsequential, adding nothing to thinking’s computational aspects—the neural firing that underpins the transforming of inputs to outputs. But consider this: Countless different things in the physical world look like they’re transforming inputs that could be described as information into outputs that could also be described as information. To our knowledge, humans, and only humans, seem to have an experience of doing so. This is first-person thinking, and it’s critical not to confuse it with third-person thinking.

Why does first-person thinking matter? First, it’s intrinsic. There’s no way to redescribe the ongoing experience of thought as something other than thought. But whether we describe kidneys, calculators, or electrical activity in the brain observed from a third-person perspective as thought is arbitrary: We can do it, but we can also choose not to. The only reason we think our brain is doing a special kind of thinking is because it seems linked to our first-person kind of thinking as well. But third-person thinking isn’t intrinsic.

Second, and more practically, our experience of our thinking shapes what kinds of thinking we’ll do next. Did it feel effortful, boring, rewarding, inspiring to think those last thoughts? That will determine whether and how often we engage in thinking of a certain kind. I’m not suggesting that our first-person experiences don’t also have neural correlates. But no scientist or philosopher can tell you why those neural processes, behaving as they do, necessarily give rise to those experiences, or to any experience at all. It’s one of the three great mysteries of the universe (that stuff exists, that life exists, that experience exists).

Will we increasingly create machines that can produce input-output patterns replicating human input-output patterns? Unquestionably. Will we create machines that go beyond this to produce useful algorithms and data transformations that humans could carry out on their own and which would improve the quality of human life? We already are, and we’ll do so more and more. Will we create machines that can do first-person thinking—can experience their own thoughts? I don’t know, but I’m not terribly confident we will. Solving this problem might be the most magnificent achievement of humankind, but we must start by recognizing that it’s indeed a problem. I’d love to see first-person thinking machines, but until we begin to figure out what makes us first-person thinking machines, everything else is just a glorified calculator.