CAN SUBMARINES SWIM?

WILLIAM POUNDSTONE

Author, Are You Smart Enough to Work at Google? and Rock Breaks Scissors: A Practical Guide to Outguessing and Outwitting Almost Everybody

My favorite Edsger Dijkstra aphorism is this one: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” Yet we keep playing the imitation game—asking how closely machine intelligence can duplicate our own intelligence, as if that were the real point. Of course, once you imagine machines with humanlike feelings and free will, you can conceive of misbehaving machine intelligence—the “AI as Frankenstein’s monster” idea. This notion is in the midst of a revival, and I started out thinking it was overblown. Lately I’ve concluded it’s not.

Here’s the case for overblown. Machine intelligence can go in so many directions that it’s a failure of imagination to focus on humanlike directions. Most of the early futurist conceptions of machine intelligence were wildly off base, because computers have been most successful at doing what humans can’t do well. Machines are incredibly good at sorting lists. Maybe that sounds boring, but think of how efficient sorting has changed the world.

In answer to some of the questions brought up here, it’s far from clear that there will ever be a practical reason for future machines to have emotions and inner dialog; to pass for human under extended interrogation; to desire, and be able to benefit from, legal and civil rights. They’re machines and they can be anything we design them to be.

But some people will want anthropomorphic machine intelligence. How many videos of Japanese robots have you seen? Honda, Sony, and Hitachi already expend substantial resources on making cute AI that has no concrete value beyond corporate publicity. They do this for no better reason than that tech enthusiasts have grown up seeing robots and intelligent computers in movies.

Almost anything that’s conceived—that’s physically possible and reasonably cheap—is realized. So humanlike machine intelligence is a meme with manifest destiny, regardless of practical value. This could entail nice machines that think, obeying Asimov’s laws. But once the technology is out there, it will get ever cheaper and filter down to hobbyists, hackers, and “machine rights” organizations. There will be interest in creating machines with will, whose interests aren’t our own. And that’s without considering what machines that terrorists, rogue regimes, and intelligence agencies of the less roguish nations may devise. I think the notion of Frankensteinian AI—AI that turns on its creators—is worth taking seriously.