Cognitive scientist, MIT Media Lab, Harvard Program for Evolutionary Dynamics

Centuries ago, some philosophers began to see the human mind as a mechanism, a notion that (unlike the mechanist interpretation of the universe) remains hotly contested to this day. With the formalization of computation, the mechanist perspective received a new theoretical foundation: The notion of the mind as an information-processing machine provided an epistemology and methods to understand the nature of our mind by re-creating it. Sixty years ago, some of the pioneers of the new computational concepts got together and created artificial intelligence (AI) as a new discipline to study the mind.

AI has probably been the most productive technological paradigm of the Information Age, but despite an impressive string of initial successes, it failed to deliver on its promise. It turned into an engineering field, creating useful abstractions and narrowly focused applications. Today this seems to have changed again. Better hardware, novel learning and representation paradigms inspired by neuroscience, and incremental progress within AI itself have led to a slew of landmark successes. Breakthroughs in image recognition, data analysis, autonomous learning, and the construction of scalable systems have spawned applications that seemed impossible a decade ago. With renewed private and public funding, AI researchers now turn toward systems that display imagination, creativity, and intrinsic motivation, and might acquire language skills and knowledge somewhat as humans do. The discipline of AI seems to have come full circle.

The new generation of AI systems is still far from being able to replicate the generality of human intelligence, and it’s hard to know how long that will take. But it seems increasingly clear that there’s no fundamental barrier on the path to humanlike intelligent systems. We’ve started to pry the mind apart into a set of puzzle blocks, and each part of the puzzle looks eminently solvable. But if we put all these blocks together into a comprehensive, working model, we won’t just end up with humanlike intelligence.

Unlike biological systems, technology scales. The speed of the fastest birds didn’t turn out to be a limit to airplanes, and artificial minds will be faster, more accurate, more alert, more aware, and more comprehensive than their human counterparts. AI will replace human decision makers, administrators, inventors, engineers, scientists, military strategists, designers, advertisers, and of course AI programmers. At that point, artificial intelligences can become self-perfecting and radically outperform human minds in every respect. I don’t think this will happen in an instant (if it did, all that would matter is who builds the first one). Before we have generally intelligent, self-perfecting AI, we’ll see many variants of task-specific, nongeneral AI, to which we can adapt. Obviously that’s already happening.

When generally intelligent machines become feasible, implementing them will be relatively cheap, and every large corporation, every government, and every large organization will find itself forced to build and use them or be threatened with extinction.

What will happen when AIs take on a mind of their own?

Intelligence is a toolbox we use to reach a given goal, but strictly speaking, it doesn’t entail motives and goals by itself. Human desires for self-preservation, power, and experience aren’t the result of human intelligence but of primate evolution, transported into an age of stimulus amplification, mass interaction, symbolic gratification, and narrative overload. The motives of our artificial minds will (at least initially) be those of the organizations, corporations, groups, and individuals that make use of their intelligence. If the business model of a company is not benevolent, then AI has the potential to make that company truly dangerous. Likewise, if an organization aims at improving the human condition, then AI might make that organization more efficient in realizing its benevolent potential.

The motivation of our AIs will stem from the existing building blocks of our society; every society will get the AI it deserves.

Our current societies aren’t well designed in this regard. Our modes of production are unsustainable and our resource allocation is wasteful—and our administrative institutions are ill-suited to address those problems. Our civilization is an aggressively growing entropy pump that destroys more at its borders than it creates at its center.

AI can make these destructive tendencies more efficient, and thus more disastrous, but it could equally well help us solve the existential challenges of our civilization. Building benevolent AI is closely connected to the task of building a society that supplies the right motivations to its building blocks. The advent of the new Age of Thinking Machines may force us to fundamentally rethink our institutions of governance, allocation, and production.