LET’S GET PREPARED!

MAX TEGMARK

Physicist, cosmologist, MIT; scientific director, Foundational Questions Institute; cofounder, Future of Life Institute; author, Our Mathematical Universe


To me, the most interesting question about artificial intelligence isn’t what we think about it but what we do about it.

In this regard, at the newly formed Future of Life Institute, we are engaging many of the world’s leading AI researchers to discuss the future of the field. Together with top economists, legal scholars, and other experts, we’re exploring all the classic questions:

What happens to humans if machines gradually replace us on the job market?
When, if ever, will machines outcompete humans at all intellectual tasks?
What will happen afterward? Will there be a machine-intelligence explosion leaving us far behind, and if so, what, if any, role will we humans play after that?

A great deal of concrete research needs to be done right now to ensure that AI systems become not only capable but also robust and beneficial, doing what we want them to do.

Just as with any new technology, it’s natural to first focus on making it work. But once success is in sight, it becomes timely to consider the technology’s societal impact and study how to reap the benefits while avoiding potential pitfalls. That’s why, after learning to make fire, we developed fire extinguishers and fire safety codes. For more powerful technologies, such as nuclear energy, synthetic biology, and artificial intelligence, optimizing the societal impact becomes progressively more important. In short, the power of our technology must be matched by our wisdom in using it.

Unfortunately, the calls for the sober research agenda that's sorely needed are being nearly drowned out by a cacophony of ill-informed views permeating the blogosphere. Let me briefly catalog the loudest few.

1.  Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists seem incapable of writing an AI article without a picture of a gun-toting robot.
2.  “It’s impossible”: As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there’s no law of physics preventing us from building even more intelligent quark blobs.
3.  “It won’t happen in our lifetime”: We don’t know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we’d be foolish to dismiss the possibility as mere science fiction.
4.  “Machines can’t control humans”: Humans control tigers not because we’re stronger but because we’re smarter, so if we cede our position as the smartest beings on our planet, we might also cede control.
5.  “Machines don’t have goals”: Many AI systems are programmed to have goals and to attain them as effectively as possible.
6.  “AI isn’t intrinsically malevolent”: Correct—but its goals may one day clash with yours. Humans don’t generally hate ants, but if we wanted to build a hydroelectric dam and there was an anthill there, too bad for the ants.
7.  “Humans deserve to be replaced”: Ask any parent how they’d feel about your replacing their child with a machine and whether they’d like a say in the decision.
8.  “AI worriers don’t understand how computers work”: When this claim came up at the conference mentioned above, the assembled AI researchers laughed hard.

Let’s not let the loud clamor about these red herrings distract us from the real challenge: The impact of AI on humanity is steadily growing, and to ensure that this impact is positive there are very difficult research problems that we need to buckle down and work on together. Because they’re interdisciplinary, involving both society and AI, they require collaboration between researchers in many fields. Because they’re hard, we need to start working on them now.

First we humans discovered how to replicate some natural processes with machines, making our own wind, lightning, and horsepower. Gradually we realized that our bodies were also machines, and the discovery of nerve cells began blurring the borderline between body and mind. Then we started building machines that could outperform not only our muscles but our minds as well. So while discovering what we are, will we inevitably make ourselves obsolete?

The advent of machines that truly think will be the most important event in human history. Whether it will be the best or worst thing ever to happen to humankind depends on how we prepare for it, and the time to start preparing is now. One doesn’t need to be a superintelligent AI to realize that running unprepared toward the biggest event in human history would be just plain stupid.