Physicist; director, MIT’s Center for Bits and Atoms; author, FAB

Something about the discussion of artificial intelligence seems to displace human intelligence. The extreme claims that AI is either our salvation or our damnation are a sure sign of the impending irrelevance of this debate.

Disruptive technologies start as exponentials, which means the first doublings can appear inconsequential because the total numbers are small. Then comes a revolution when the exponential explodes, along with exaggerated claims and warnings to match, yet the explosion is a straight extrapolation of what has long been apparent on a log plot. That's roughly when growth limits kick in, the exponential crosses over to a sigmoid, and the extreme hopes and fears disappear. That is what we're now living through with AI. The size of commonsense databases that can be searched, the number of inference layers that can be trained, and the dimension of feature vectors that can be classified have all been progressing in a way that seems discontinuous to anyone who hasn't been following them.
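The crossover described above can be sketched with a logistic curve. This is a hypothetical illustration, not a model of any particular AI metric; the rate, starting value, and carrying capacity are arbitrary. Early on, a logistic is indistinguishable from a pure exponential, so it traces a straight line on a log plot; once growth limits kick in, it flattens while the extrapolated exponential races away.

```python
import math

def logistic(t, K=1e6, r=0.5, x0=1.0):
    """Logistic growth: exponential at first, saturating at carrying capacity K."""
    return K / (1 + (K / x0 - 1) * math.exp(-r * t))

def exponential(t, r=0.5, x0=1.0):
    """Pure exponential with the same initial growth rate."""
    return x0 * math.exp(r * t)

# Early on, the two curves are nearly identical: small totals, steady doublings.
early = logistic(5) / exponential(5)

# Late in the curve, limits dominate: the logistic has flattened near K
# while the exponential extrapolation has exploded far past it.
late = logistic(50) / exponential(50)

print(f"ratio at t=5:  {early:.4f}")
print(f"ratio at t=50: {late:.2e}")
```

The same arithmetic explains why both the exaggerated hopes and the exaggerated fears fade: the straight line on the log plot was never going to continue forever.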

Notably absent from either side of the debate are the people making many of the most important contributions to this progress. Advances like random matrix theory for compressed sensing, convex relaxations that yield tractable heuristics for intractable problems, and kernel methods for high-dimensional function approximation are fundamentally changing our understanding of what it means to understand something.
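To make one of these advances concrete, here is a minimal sketch of the idea behind kernel methods: the "kernel trick." The example below (illustrative values only) shows that a degree-2 polynomial kernel computes the inner product in a six-dimensional feature space without ever constructing the features, which is what lets these methods work in very high dimensions.

```python
import math

def phi(x):
    """Explicit degree-2 feature map for a 2-D input x = (x1, x2):
    the six features whose inner product equals (x . y + 1)**2."""
    x1, x2 = x
    return [1.0,
            math.sqrt(2) * x1, math.sqrt(2) * x2,
            x1 * x1, x2 * x2,
            math.sqrt(2) * x1 * x2]

def poly_kernel(x, y):
    """Degree-2 polynomial kernel: the same inner product, no features built."""
    return (x[0] * y[0] + x[1] * y[1] + 1) ** 2

x, y = (0.5, -1.0), (2.0, 0.25)
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))  # build 6-D features, then dot
implicit = poly_kernel(x, y)                           # one line, same number
print(explicit, implicit)
```

The two computations agree exactly; for higher degrees or richer kernels the explicit feature space becomes enormous or infinite, while the kernel evaluation stays cheap.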

The evaluation of AI has been an exercise in moving goalposts. Chess was conquered by analyzing more moves, Jeopardy! was won by storing more facts, and natural-language translation was accomplished by accumulating more examples. These advances suggest that the secret of AI is likely to be that there isn't a secret. Like so much else in biology, intelligence appears to be a collection of really good hacks. There's a vanity in thinking that our consciousness is the defining attribute of our uniqueness as a species, but there's growing empirical evidence from studies of animal behavior and cognition that self-awareness evolved along a continuum and can be tested for in a number of other species. There's no reason to accept a mechanistic explanation for the rest of life while declaring one part of it to be off-limits.

We’ve long since become symbiotic with machines for thinking; my ability to do research rests on tools that help me to perceive, remember, reflect, and communicate. Asking whether or not they’re intelligent is as fruitful as asking how I know I exist—amusing philosophically but not testable empirically.

Asking whether or not they’re dangerous is prudent, as it is for any technology. From steam trains to gunpowder to nuclear power to biotechnology, we’ve never not been simultaneously doomed and about to be saved. In each case, salvation has lain in the much more interesting details rather than a simplistic yes/no argument for or against. We ignore the history of AI and everything else if we think this issue will be any different.