Research associate and lecturer, Psychology Department, Harvard University; author, Alex & Me

While machines are terrific at computing, they’re not very good at actual thinking. Machines have an endless supply of grit and perseverance, and, as others have said, will effortlessly crunch out the answer to a complicated mathematical problem or direct you through traffic in an unknown city, all by means of the algorithms and programs installed by humans. But what do machines lack?

Machines (at least so far, and I don’t think this will change with a Singularity) lack vision. And I don’t mean sight. Machines do not devise the next new killer app on their own. Machines don’t decide to explore distant galaxies—they do a terrific job once we send them, but that’s a different story. Machines are certainly better than the average person at solving problems in calculus and quantum mechanics—but machines don’t have the vision to see the need for such constructs in the first place. Machines can beat humans at chess, but they have yet to design the type of mind game that will intrigue humans for centuries. Machines can see statistical regularities my feeble brain will miss, but they can’t make the insightful leap that connects disparate sets of data to devise a new field.

I’m not terribly concerned about machines that compute. I’ll deal with the frustration of my browser in exchange for a smart refrigerator that, based on tracking RFID codes of what comes in and out, texts me to buy cream on my way home (hint to those working on such a system . . . sooner rather than later!). I like having my computer underline words it doesn’t recognize, and I’ll deal with the frustration of having to ignore its comments on “phylogenetic” in exchange for catching my typo on a common term (in fact, it won’t let me misspell a word here to make a point). But these examples show that just because a machine goes through the motions of what looks like thinking doesn’t mean it’s actually engaging in that behavior—or at least, in one equivalent to the human process.

I’m reminded of one of the earliest studies to train apes to use “language”—in this case, to manipulate plastic chips to answer a number of questions. The system was replicated with college students, who—not surprisingly—did exceptionally well but, when asked what they’d been trained to do, claimed that they’d solved some interesting puzzles and had no idea they were being taught a language. Much debate ensued, and much was learned and put into practice in subsequent studies, so that several nonhuman subjects did eventually understand the referential meaning of the various symbols they were taught to use, and we did learn a lot about ape intelligence from the original methodology. The point, however, is that what initially looked like a complicated linguistic system needed a lot more work before it became more than a series of (relatively) simple paired associations.

My concern, therefore, is not about thinking machines but about a complacent society—one that might give up on its visionaries in exchange merely for getting rid of drudgery. Humans need to take advantage of the cognitive capacity that’s freed when machines take over the scut work—and be thankful for that freedom, channeling the ability it releases into the hard work of solving pressing problems that demand insightful, visionary leaps.