Toshiba Professor of Media Arts & Sciences, MIT; director, Human Dynamics Lab & Media Lab Entrepreneurship Program; author, Social Physics: How Good Ideas Spread—The Lessons from a New Science

The Global Artificial Intelligence (GAI) has already been born. Its eyes and ears are the digital devices all around us: credit cards, land-use satellites, cell phones, and of course the pecking of billions of people using the Web. Its central brain is rather like a worm at the moment—nodes that combine some sensors and some effectors—but the whole is far from what you’d call a coordinated intelligence.

Already many countries are using this infant nervous system to shape people’s political behavior and “guide” the national consensus: China’s Great Firewall, its siblings in Iran and Russia, and of course both major political parties in the United States. The national intelligence and defense agencies form a quieter, more hidden part of the GAI, but despite being quiet they’re the parts that control the fangs and claws. More visibly, companies are beginning to use this newborn nervous system to shape consumer behavior and increase profits.

While the GAI is newborn, it has very old roots. The fundamental algorithms and programming of the emerging GAI have been created by the ancient guilds of law, politics, and religion. This is a natural evolution, because creating a law is just specifying an algorithm, and governance via bureaucrats is how you execute the program of law. Most recently, newcomers such as merchants, social crusaders, and even engineers have dared to add their flourishes to the GAI. The results of all these laws and programming are an improvement over Hammurabi, but we’re still plagued by lack of inclusion, transparency, and accountability, along with poor mechanisms for decision making and information gathering.

However, in the last decades, the evolving GAI has begun to use digital technologies to replace human bureaucrats. Those with primitive programming and mathematical skills—namely lawyers, politicians, and many social scientists—have become fearful of losing their positions of power and so are making all sorts of noise about the dangers of allowing engineers and entrepreneurs to program the GAI. To my ears, the complaints of the traditional programmers sound rather hollow, given their repeated failures across thousands of years.

If we look at newer, digital parts of the GAI, we can see a pattern. Some new parts are saving humanity from the mistakes of the traditional programmers: Land-use satellites alerted us to global warming, deforestation, and other environmental problems and gave us the facts to address those harms. Similarly, statistical analyses of health care, transportation, and work patterns have given us a worldwide network that can track global pandemics and guide public health efforts. On the other hand, some of the new parts—such as the Great Firewall, the NSA, and the U.S. political parties—are scary, because of the possibility that a small group of people can potentially control the thoughts and behavior of very large groups of people, perhaps without those people even knowing they’re being manipulated.

What this suggests is that it isn’t the Global Artificial Intelligence itself that’s worrisome; it’s how it’s controlled. If control is in the hands of just a few people, or if the GAI is independent of human participation, then the GAI can be the enabler of nightmares. But if control is in the hands of a large and diverse cross section of people, then the GAI’s power is likely to be used to address problems faced by the entire human race. It’s to our common advantage that the GAI become a distributed intelligence with a large and diverse set of humans providing guidance.

Creation of an effective GAI is critical, because today the human race faces many extremely serious problems. The GAI we’ve developed over the last 4,000 years, mostly made up of politicians and lawyers executing algorithms and programs developed centuries ago, is not only failing to address these serious problems but is threatening to extinguish us.

For humanity as a whole to first achieve and then sustain an honorable quality of life, we need to carefully guide the development of our GAI. Such a GAI might be in the form of a reengineered United Nations that uses new digital intelligence resources to enable sustainable development. But because existing multinational governance systems have failed so miserably, such an approach may require replacing most of today’s bureaucracies with artificial intelligence “prosthetics”—digital systems that reliably gather accurate information and ensure that resources are distributed according to plan.

We already see this digital evolution improving the effectiveness of military and commercial systems, but it’s interesting to note that as organizations use more digital prosthetics they also tend to evolve toward more distributed human leadership. Perhaps instead of elaborating traditional governance structures with digital prosthetics, we’ll develop new, better types of digital democracy.

No matter how a new GAI develops, two things are clear. First, without an effective GAI, achieving an honorable quality of life for all of humanity seems unlikely. To vote against developing a GAI is to vote for a more violent, sick world. Second, the danger of a GAI comes from concentration of power. We must figure out how to build broadly democratic systems that include both humans and computer intelligences. It’s critical to start building and testing GAIs that both solve humanity’s existential problems and ensure equality of control and access. Otherwise we may be doomed to a future full of environmental disasters, wars, and needless suffering.