What to Think About Machines That Think


MACHINES WON’T BE THINKING ANYTIME SOON

GARY MARCUS

Professor of psychology, New York University; author, Guitar Zero: The New Musician and the Science of Learning

What I think about machines thinking is that it won’t happen anytime soon. I don’t imagine there’s any in-principle limitation; carbon isn’t magical, and I suspect silicon will do just fine. But lately the hype has gotten way ahead of reality. Learning to detect a cat in full frontal position after 10 million frames drawn from Internet videos is a long way from understanding what a cat is, and anybody who thinks that we’ve “solved” AI doesn’t realize the limitations of the current technology.

To be sure, there have been exponential advances in narrow-engineering applications of artificial intelligence, such as playing chess, calculating travel routes, or translating texts in rough fashion, but there’s been scarcely more than linear progress in five decades of working toward strong AI. For example, the different flavors of intelligent personal assistants available on your smartphone are only modestly better than Eliza, a primitive natural-language-processing program from the mid-1960s. We still have no machine that can, for instance, read all that the Web has to say about war and plot a decent campaign, nor do we even have an open-ended AI system that can figure out how to write an essay to pass a freshman composition class or an eighth-grade science exam.
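To give a sense of how shallow Eliza-style processing is, here is a minimal sketch of keyword pattern-matching in that spirit. The rules and replies below are invented for illustration (they are not Weizenbaum’s actual script); the point is that the program transforms surface strings without representing any meaning at all.

```python
import re

# Toy Eliza-style rules: a regex keyword pattern mapped to a canned reply
# template. These particular rules are illustrative, not Eliza's real script.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance: str) -> str:
    """Return the first matching canned reply, or a generic fallback.

    Note there is no parsing, no memory, and no model of what the
    words refer to -- only string substitution.
    """
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."
```

For example, `respond("I am worried about AI")` echoes back “Why do you say you are worried about AI?”, which can look superficially attentive while understanding nothing.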

Why so little progress, despite the spectacular increases in memory and CPU power? When Marvin Minsky and Gerald Sussman attempted the construction of a visual system in 1966, did they envision superclusters or gigabytes that would sit in your pocket? Why haven’t advances of this nature led us straight to machines with the flexibility of human minds? Consider three possibilities:

1.  We’ll solve AI (and this will finally produce machines that can think) as soon as our machines get bigger and faster.
2.  We’ll solve AI when our learning algorithms get better. Or when we have even Bigger Data.
3.  We’ll solve AI when we finally understand what it is that evolution did in the construction of the human brain.

Ray Kurzweil and many others seem to put their weight on option (1), sufficient CPU power. But how many doublings in CPU power would be enough? Have all the doublings so far gotten us closer to true intelligence? Or just to narrow agents that can give us movie times?

In option (2), Big Data and better learning algorithms have so far got us only to innovations like machine translations, which provide fast but mediocre translations piggybacking onto the prior work of human translators, without any semblance of thinking. The machine translation engines available today cannot, for example, answer basic queries about what they just translated. Think of them more as idiot savants than fluent thinkers.

My bet is on option (3). Evolution seems to have endowed us with a powerful set of priors (or what Noam Chomsky or Steven Pinker might call innate constraints) that allow us to make sense of the world based on limited data. Big Efforts with Big Data aren’t really getting us closer to understanding those priors, so while we’re getting better and better at the sort of problem that can be narrowly engineered (like driving on well-mapped roads), we’re not getting appreciably closer to machines with commonsense understanding or the ability to process natural language. Or, more to the point of this year’s Edge Question, to machines that actually think.




