Physicist, cosmologist, Arizona State University; author, A Universe from Nothing

There has of late been a great deal of ink devoted to concerns about artificial intelligence and a future world where machines can “think,” where the latter term ranges from simple autonomous decision making to full-fledged self-awareness. I don’t share most of these concerns, and I’m excited by the possibility of experiencing thinking machines, both for the opportunities they’ll offer for potentially improving the human condition and the insights they’ll undoubtedly provide on the nature of consciousness.

First, let’s make one thing clear. Even with the exponential growth in computer storage and processing power over the past forty years, thinking computers will require a digital architecture bearing little resemblance to current computers. Nor are they likely to become competitive with consciousness in the near term. A simple physics thought experiment supports this claim:

Given current power consumption by electronic computers, a computer with the storage and processing capability of the human mind would require more than 10 terawatts of power, within a factor of 2 of the current power consumption of all of humanity. The human brain uses about 10 watts of power. This means a mismatch of a factor of 10¹², or a million million. Over the past decade, the doubling time for megaflops/watt has been about three years. Even assuming that Moore’s Law continues unabated, this means it will take about forty doubling times, or about 120 years, to reach a comparable power dissipation. Moreover, each doubling in efficiency requires a relatively radical change in technology, and it’s extremely unlikely that forty such doublings could be achieved without essentially changing the way computers compute.
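The arithmetic behind this estimate can be checked in a few lines. Using the essay's own rounded figures (a brain-equivalent computer drawing roughly 10 terawatts at today's efficiency, the brain itself using roughly 10 watts, and efficiency doubling every three years), a quick sketch recovers both the mismatch factor and the 120-year timescale:

```python
import math

# Figures taken from the essay (rounded, order-of-magnitude only).
computer_power_w = 10e12   # ~10 terawatts for a brain-equivalent machine today
brain_power_w = 10.0       # ~10 watts for the human brain
years_per_doubling = 3     # observed doubling time for megaflops/watt

# The efficiency gap that must be closed.
gap = computer_power_w / brain_power_w
print(f"mismatch factor: {gap:.0e}")          # prints "mismatch factor: 1e+12"

# Number of efficiency doublings needed, and the time they would take.
doublings = math.log2(gap)
years = doublings * years_per_doubling
print(f"{doublings:.0f} doublings, ~{years:.0f} years")  # "40 doublings, ~120 years"
```

Since 2⁴⁰ ≈ 10¹², closing a million-million-fold gap at one doubling every three years indeed takes on the order of 120 years, which is the essay's point.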

Ignoring for a moment the logistical challenges, I imagine no other impediment, in principle, to developing a truly self-aware machine. Before this happens, machine decision making will play an ever more important role in our lives. Some people see this as a concern, but it’s been happening for decades. Starting perhaps with the rudimentary computers called elevators, which determine how and when we’ll get to our apartments, we’ve let machines autonomously guide us. We fly on airplanes guided by autopilot, our cars make decisions about when they should be serviced or when tires should be filled, and fully self-driving cars are probably around the corner.

For many, if not most, relatively automatic tasks, machines are clearly much better decision makers than humans, and we should rejoice that they have the potential to make everyday activities safer and more efficient. We haven’t lost control, because we create the conditions and initial algorithms that determine the decision making. I envisage the human/computer interface as like having a helpful partner; the more intelligent machines become, the more helpful they’ll be as partners. Any partnership requires some level of trust and loss of control, but if the benefits often outweigh the losses, we preserve the partnership. If they don’t, we sever it. I see no difference in whether the partner is human or a machine.

One area where we may have to be cautious about partnerships involves the command-and-control infrastructure in modern warfare. Because we have the ability to destroy much of human life on this planet, the idea that intelligent machines might one day control the decision-making apparatus that leads to pushing the big red button—or even launching a less catastrophic attack—is worrisome. This is because when it comes to decision making, we often rely on intuition and interpersonal communication as much as on rational analysis—the Cuban missile crisis is a good example—and we assume that intelligent machines won’t have these capabilities.

However, intuition is the product of experience, and communication is, in the modern world, not restricted to telephones or face-to-face conversations. Once again, intelligent design of systems, with numerous redundancies and safeguards built in, suggests to me that machine decision making, even in the case of violent hostilities, is not necessarily worse than decision making by humans.

So much for possible worries. Let me end with what I think is the most exciting scientific aspect of machine intelligence. Machines currently help us do most of our science, by calculating for us. Beyond simple numeric programming, most graduate students in physics now depend on Mathematica, which does most of the symbolic algebraic manipulation we used to do ourselves when I was a student. But this just scratches the surface.

I’m interested in what machines will focus on when they get to choose the questions as well as the answers. What questions will they choose? What will they find interesting? And will they do physics the same way we do? Surely quantum computers, if they ever become practical, will have a much better “intuitive” understanding of quantum phenomena than we will. Will they be able to make much faster progress unraveling the fundamental laws of nature? When will the first machine win a Nobel Prize? I suspect, as always, that the most interesting questions are the ones we haven’t yet thought of.