Theoretical physicist

As machines rise to sentience—and they will—they’ll compete in Darwinian fashion for resources, survival, and propagation. This scenario seems like a nightmare to most people, with fears stoked by movies of Terminator robots and computer-directed nuclear destruction, but the reality will likely be different. We already have nonhuman autonomous entities operating in our society with the legal rights of humans. These entities—corporations—act to fulfill their missions without love or care for human beings.

Corporations are sociopaths, and they’ve done great harm, but they’ve also been a great force for good in the world, competing in the capitalist arena by providing products and services, and (for the most part) obeying laws. Corporations are ostensibly run by their boards, composed of humans, but these boards are in the habit of delegating power, and as computers become more capable of running corporations they’ll get more of that power. The corporate boards of the future will be circuit boards.

Although extrapolation is accurate only for a limited time, experts mostly agree that Moore’s Law will hold for many more years and that computers will become increasingly powerful, possibly exceeding the computational abilities of the human brain before the middle of this century. Even if no large leaps are made in understanding intelligence algorithmically, computers will eventually be able to simulate the workings of a human brain (itself a biological machine) and attain superhuman intelligence through brute-force computation. However, although computational power is increasing exponentially, supercomputer costs and electrical-power efficiency aren’t keeping pace. The first machines capable of superhuman intelligence will be expensive and will require enormous amounts of electrical power—they’ll need to earn money to survive.
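The extrapolation behind this claim can be made concrete with a back-of-envelope sketch. Every figure below—the brain's computational throughput, today's machine capability, the doubling period, and the start year—is an illustrative assumption chosen for the example, not a number from the text; estimates of the brain's capacity alone span several orders of magnitude.

```python
import math

# All constants are illustrative assumptions, not established figures.
BRAIN_OPS_PER_SEC = 1e18      # one rough upper-range estimate for the brain
CURRENT_OPS_PER_SEC = 1e15    # assumed machine capability in the start year
DOUBLING_TIME_YEARS = 2.0     # classic Moore's-Law doubling period
START_YEAR = 2015             # arbitrary reference year

def year_reached(target_ops,
                 current_ops=CURRENT_OPS_PER_SEC,
                 doubling=DOUBLING_TIME_YEARS,
                 start=START_YEAR):
    """Year when exponential growth first reaches target_ops."""
    doublings = math.log2(target_ops / current_ops)
    return start + doublings * doubling

print(round(year_reached(BRAIN_OPS_PER_SEC)))  # prints 2035 under these assumptions
```

Under these (debatable) inputs, brain-scale computation arrives around 2035—comfortably "before the middle of this century"; shifting any assumption by an order of magnitude moves the answer by only about six to seven years, which is why the mid-century claim is fairly robust to the exact numbers.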

The environmental playing field for superintelligent machines is already in place; in fact, the Darwinian game is afoot. The trading machines of investment banks are competing, for serious money, on the world’s exchanges, having put human day traders out of business years ago. As computers and algorithms advance beyond investing and accounting, machines will be making more and more corporate decisions, including strategic decisions, until they’re running the world. This won’t be a bad thing, because the machines will play by the rules of our current capitalist society and create products and advances of great benefit to humanity, supporting their operating costs. Intelligent machines will be better able to cater to humans than humans are, and will be motivated to do so, at least for a while.

Computers share knowledge much more easily than humans do, and they can retain that knowledge longer, becoming wiser than humans. Many forward-thinking companies already see this writing on the wall and are luring the best computer scientists out of academia with better pay and more advanced hardware. A world of superintelligent-machine-run corporations won’t be that different for humans from the world of today; it will just be better, with more advanced goods and services available at very little cost and more leisure time for those who want it.

Of course, the first superintelligent machines probably won’t be corporate; they’ll be operated by governments. And this will be much more hazardous. Governments are more flexible in their actions than corporations; they create their own laws. And as we’ve seen, even the best can engage in torture when they think their survival is at stake. Governments produce nothing, and their primary modes of competition for survival and propagation are social manipulation, legislation, taxation, corporal punishment, murder, subterfuge, and warfare. When Hobbes’s Leviathan gains a superintelligent brain, things could go very, very badly. It isn’t inconceivable that a synthetic superintelligence heading a sovereign government would institute Roko’s Basilisk.

Imagine that a future powerful and lawless superintelligence, for competitive advantage, wants to have come into existence as early as possible. As the head of a government, wielding the threat of torture as a familiar tool, this entity could promise to punish any human or nonhuman entity who, in the past, became aware that this might happen and didn’t work to bring this AI into existence. This is an unlikely but terrifying scenario. People who are aware of this possibility and trying to “align” AI to human purposes—or advising caution rather than working to create AI as quickly as possible—are putting themselves at risk.

Dictatorial governments aren’t known to be especially kind to those who tried to keep them from existing. If you’re willing to entertain the simulation hypothesis, then maybe—given the amount of effort currently under way to control or curtail an AI that doesn’t yet exist—you’ll consider that this world is the simulation to torture those who didn’t help it come into existence earlier. Maybe, if you do work on AI, our superintelligent machine overlords will be good to you.