MACHINES THAT THINK ARE IN THE MOVIES

ROGER SCHANK

Psychologist and computer scientist, Engines for Education, Inc.; author, Teaching Minds: How Cognitive Science Can Save Our Schools


Machines cannot think. They’re not going to think anytime soon. They may increasingly do more interesting things, but the idea that we need to worry about them, regulate them, or grant them civil rights is just plain silly.

The overpromising of “expert systems” in the 1980s killed off serious funding for the kind of AI that tries to build virtual humans. Very few people are working in this area today. But, according to the media, we must be very afraid.

We have all been watching too many movies.

There are two choices when you work on AI. One is the “let’s copy humans” method. The other is the “let’s do some really fast statistics-based computing” method. As an example, early chess-playing programs tried to outcompute their human opponents, but human players have strategies, and anticipating an opponent’s thinking is part of chess playing too. When the “outcompute them” strategy didn’t work, AI people began watching what expert players did and imitating that. The “outcompute them” strategy is more in vogue today. We can call both of these methodologies “AI” if we like, but neither will lead to machines that create a new society.

The “outcompute them” strategy is not frightening, because the computer really has no idea what it’s doing. It can count things fast without understanding what it’s counting. It has counting algorithms—that’s it. We saw this with IBM’s Watson program on Jeopardy!

One Jeopardy! question was, “It was the anatomical oddity of U.S. gymnast George Eyser, who won a gold medal on the parallel bars in 1904.”

A human opponent answered that Eyser was missing a hand (wrong). And Watson answered, “What is a leg?” Watson lost too, for failing to note that the leg was “missing.”

Try a Google search on “Gymnast Eyser.” Wikipedia comes up first with a long article about him. Watson depends on Google. If Jeopardy! contestants could use Google, they’d do better than Watson. Watson can translate “anatomical” into “body part,” and Watson knows the names of the body parts. Watson doesn’t know what an “oddity” is, however. Watson would not have known that a gymnast without a leg was weird. If the question had been “What was weird about Eyser?,” humans would have done fine. Watson would not have found “weird” in the Wikipedia article nor understood what gymnasts do, nor why anyone would care. Try Googling “weird” and “Eyser” and see what you get. Keyword search is not thinking, nor anything like thinking.
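To make the point concrete, here is a minimal sketch, in Python, of what keyword matching amounts to. The text snippet and the little body-part table are invented for illustration; this is not Watson’s actual machinery. Matching can surface “leg” because the word is literally there, but a search for “weird” comes up empty, because “weird” is a judgment a reader makes, not a word in the text.

```python
# A toy keyword matcher over an invented snippet -- not Watson's code.
SNIPPET = ("George Eyser was an American gymnast who won six medals "
           "at the 1904 Games. He competed on a wooden prosthesis, "
           "having lost his left leg in his youth.")

# "Anatomical" can be mapped to body-part words the program knows about.
BODY_PARTS = {"leg", "arm", "hand", "foot"}

def find_body_part(text):
    """Return any known body-part word that literally appears in the text."""
    words = {w.strip(".,").lower() for w in text.split()}
    return words & BODY_PARTS

print(find_body_part(SNIPPET))     # {'leg'} -- the word is in the text
print("weird" in SNIPPET.lower())  # False  -- the judgment is not
```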

If we asked Watson why a disabled person would perform in the Olympics, Watson would have no idea what was being asked. It wouldn’t have understood the question, much less have been able to find the answer. Number crunching can get you only so far. Intelligence, artificial or otherwise, requires knowing why things happen and what emotions they stir up, and being able to predict possible consequences of actions. Watson can’t do any of that. Thinking and searching text are not the same thing.

The human mind is complicated. Those of us on the “let’s copy humans” side of AI spend our time thinking about what humans can do. Many scientists think about this, but basically we don’t know that much about how the mind works. AI people try to build models of the parts we do understand. About how language is processed or how learning works, we know a little; about consciousness or memory retrieval, not so much.

As an example: I’m working on a computer that mimics human memory organization. The idea is to produce a computer that can, as a good friend would, tell you just the right story at the right time. To do this, we have collected (on video) thousands of stories (about defense, about drug research, about medicine, about computer programming, etc.). When someone’s trying to do something or find something out, our program can chime in with a story it’s reminded of. Is this AI? Of course it is. Is it a computer that thinks? Not exactly.

Why not?

In order to accomplish this task, we must interview experts and then we must index the meaning of the stories they tell according to the points they make, the ideas they refute, the goals they talk about achieving, and the problems they experienced in achieving them. Only people can do this. The computer can match the index assigned to other indices, such as those in another story it has, or indices from user queries, or from an analysis of a situation it knows the user is in. The computer can come up with a very good story to tell, just in time. But of course it doesn’t know what it’s saying. It can simply find the best story to tell.
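Here, roughly, is what that matching step looks like. This is a toy sketch with invented index labels, not our actual system; the point is that people assign the indices, and the program merely counts how many of them overlap.

```python
# A toy version of index-based story retrieval. The stories and index
# labels are invented; humans assign the indices, the program compares them.
stories = [
    {"title": "The trial that almost failed",
     "indices": {"goal:drug-approval", "problem:small-sample",
                 "point:replicate-early"}},
    {"title": "Shipping the radar software",
     "indices": {"goal:deliver-on-time", "problem:changing-specs",
                 "point:freeze-requirements"}},
]

def best_story(query_indices):
    """Return the story whose human-assigned indices overlap the query most."""
    return max(stories, key=lambda s: len(s["indices"] & query_indices))

# A user stuck with shifting requirements gets the right story back...
print(best_story({"problem:changing-specs", "goal:deliver-on-time"})["title"])
# ...but the program has no idea what "changing specs" means. It only
# counted overlapping labels; the understanding lives in the indexers.
```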

Is this AI? I think it is. Does it copy how humans index stories in memory? We’ve been studying how people do this for a long time, and we think it does. Should you be afraid of this “thinking” program?

This is where I lose it about the fear of AI. There’s nothing we can produce that anyone should be frightened of. If we could actually build a mobile intelligent machine that could walk, talk, and chew gum, the first uses of that machine would certainly not be to take over the world or form a new society of robots. A much simpler use would be as a household robot. Everyone wants a personal servant. The movies depict robot servants (although usually stupidly) because they’re funny and seem like cool things to have.

Why don’t we have them? Because a useful servant entails something that understands when you tell it something, that learns from its mistakes, that can navigate your home successfully, and that doesn’t break things or act annoyingly (all of which is way beyond anything we can build). Don’t worry about it chatting up other robot servants and forming a union. There would be no reason to build such a capability into a servant. Real servants are annoying sometimes because they’re people, with human needs. Computers don’t have such needs.

We’re nowhere close to creating this kind of machine. To do so would require a deep understanding of human interaction. It would have to understand “Robot, you overcooked that again,” or “Robot, the kids hated that song you sang them.” Everyone should stop worrying and start rooting for some nice AI stuff we can all enjoy.