DESIGN MACHINES TO DEAL WITH THE WORLD’S COMPLEXITY

PETER NORVIG

Computer scientist; director of research, Google, Inc.; coauthor (with Stuart Russell), Artificial Intelligence: A Modern Approach


In 1950, Alan Turing wisely recognized that the question “Can machines think?” was not helpful and declared, “I shall replace the question with another.” He replaced it with a series of tests that measure a machine’s capabilities by how well it performs, thus getting not a binary answer to “Can machines think?” but a detailed evaluation of “What tasks can machines do?”

So let’s explore what it is that machines can do.

In this forum and others, clever people tell us not to worry about AI, while equally clever people say we should. Whom do we believe? Pessimists warn that we don’t know how to safely and reliably build large, complex AI systems. They have a valid point. We also don’t know how to safely and reliably build large, complex non-AI systems. We need to do better at predicting, controlling, and mitigating the unintended consequences of the systems we build. For example, we invented the internal combustion engine 150 years ago, and in many ways it has served humanity well, but it has also led to widespread pollution, political instability over access to oil, more than a million traffic deaths per year, and (some say) a deterioration in the social cohesiveness of neighborhoods.

AI gives us powerful tools with which to build systems. And as with any powerful tool, the resulting systems will inevitably have both positive and unintended consequences. The interesting issues unique to AI are adaptability, autonomy, and universality.

Systems that use machine learning are adaptable. They change over time based on what they “learn” from examples. (While it remains linguistically controversial whether machines think, the vernacular has accepted the usage “machines learn.”) Adaptability is useful. We want, say, our automated spelling-correction programs to learn new terms, such as “bitcoin,” without waiting for a new dictionary edition to list them. But sometimes an adaptable program can be nudged, example by example, to the point where its responses are inaccurate. Just as bridge designers must deal with crosswinds, so the designers of AI systems must deal with these issues.
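To make adaptability concrete, here is a minimal sketch in Python of a frequency-based spelling corrector that learns from the words it observes. The word lists, class, and method names are illustrative assumptions, not any particular product’s code:

```python
from collections import Counter

def edits1(word):
    """All strings one edit (delete, swap, replace, or insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    swaps = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    replaces = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + swaps + replaces + inserts)

class AdaptiveCorrector:
    """Suggests the most frequently seen known word within one edit."""

    def __init__(self, corpus_words):
        self.counts = Counter(corpus_words)

    def observe(self, word):
        # Adaptation: every word users accept updates the model, so a new term
        # such as "bitcoin" becomes known without waiting for a dictionary
        # edition. The same door lets bad examples, fed one by one, nudge the
        # corrector toward inaccurate responses.
        self.counts[word] += 1

    def correct(self, word):
        if word in self.counts:
            return word
        candidates = edits1(word) & self.counts.keys()
        return max(candidates, key=self.counts.get) if candidates else word

corrector = AdaptiveCorrector(["the", "coin", "bit"])
print(corrector.correct("bitcoim"))  # unknown term: no known word nearby yet
corrector.observe("bitcoin")         # users keep typing the new term
print(corrector.correct("bitcoim"))  # now corrected to "bitcoin"
```

The same observe() door that lets the model pick up “bitcoin” is the door through which carefully chosen examples can push it toward bad corrections.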

Some critics worry that many AI systems are built with a framework that maximizes expected utility. Such a system estimates the current state of the world, considers all possible actions it can take, simulates their possible outcomes, and then chooses the action leading to the best distribution of possible outcomes. It can make errors at any point along the way, but the concern here is in determining the best outcome—what it is that we desire. If we describe the wrong desires, we may get the wrong results. History shows this happening in all kinds of systems we build, not just in AI systems. The U.S. Constitution is like a computer program specifying our desires; the framers made what we now recognize as an error in specification, and well over 600,000 lives were lost before the Thirteenth Amendment corrected it. Similarly, we designed a stock-trading system that allowed the creation of bubbles that led to busts. These are important issues for system design; the world is complicated, so acting correctly in the world is complicated.
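The decision loop described above can be written in a few lines. Here is a minimal sketch in Python; the toy state, actions, outcome model, and utility function are assumptions for illustration, not any fielded system’s implementation:

```python
def choose_action(state, actions, outcomes, utility):
    """Return the action with the highest expected utility.

    outcomes(state, action) -> list of (probability, next_state) pairs
    utility(next_state)     -> a number saying how much we desire that state
    """
    def expected_utility(action):
        return sum(p * utility(s) for p, s in outcomes(state, action))
    return max(actions, key=expected_utility)

# Toy problem with made-up numbers: hold cash, or invest and risk a bust.
def outcomes(state, action):
    if action == "invest":
        return [(0.6, state * 1.5), (0.4, state * 0.5)]  # boom or bust
    return [(1.0, state)]                                 # hold: nothing changes

# A linear utility says "more money is strictly better," so the system invests.
print(choose_action(100, ["hold", "invest"], outcomes, utility=lambda s: s))
```

Every name in the sketch is a place where an error can enter: the state estimate, the outcome model, and above all the utility function, which is where we describe our desires.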

With regard to autonomy: If AI systems act on their own, they can make errors that might not be made by a system with a human in the loop. Again, this valid concern is not unique to AI. Consider our system of automated traffic lights, which replaced the human direction of traffic once the number of cars exceeded the number of available policemen. The automated system leads to some errors, but this is deemed a worthwhile tradeoff. We’ll continue to make tradeoffs in our deployment of autonomous systems. We may eventually see a wide range of autonomous systems that displace people, possibly leading to increased unemployment and income inequality—to me the most serious concern about potential future AI systems. In past technological revolutions—agricultural and industrial—the character of work changed, but the changes happened over generations rather than years or decades, and always led to new jobs that replaced the old ones. We may be in for a period of much more rapid change that could alter the notion of a full-time job (a notion only a few centuries old).

In effect, a job insures against variability, guaranteeing the employee a steady source of income even though he or she might make more as a freelancer or entrepreneur. Similarly, an employer might not need the employee all year long but is willing to pay for steady access to the employee’s availability. So full-time jobs provide stability but are slightly suboptimal for both parties. If they’re largely replaced by automation, we’ll need some way to restore that stability.

Another issue is the universality of intelligent machines. In 1965, the British mathematician I. J. Good wrote that “an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.” The reality is more nuanced.

As a species, we clearly value intelligence (we named ourselves after it), but in the real world, intelligence is only one of many attributes. The smartest person is not always the most successful; the wisest policies are not always those adopted. Recently I spent an hour reading about the Middle East situation, and thinking. I didn’t come up with a solution. Now imagine a hypothetical speed superintelligence machine (as described by Nick Bostrom) that can think as well as the smartest human but 1,000 times faster. I doubt that it would come up with a solution either. Computational complexity theory reveals a wide class of problems immune to intelligence, in the sense that no matter how clever you are, no approach is any better than trying all possible solutions; no matter how much computing power you have, it won’t be enough.
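A back-of-the-envelope calculation makes the point about brute-force search; the checking rates below are assumptions chosen only for illustration:

```python
import math

def largest_feasible_n(checks_per_second, seconds=365 * 24 * 3600):
    """Largest n for which all 2**n candidate solutions can be tried in a year."""
    return int(math.log2(checks_per_second * seconds))

rate = 10**9                            # assume a billion checks per second
print(largest_feasible_n(rate))         # about 54
print(largest_feasible_n(rate * 1000))  # about 64: 1,000x faster buys ~10 more
```

For a problem whose only known approach is to enumerate 2^n candidates, a thousandfold speedup extends the feasible instance size by only about ten, because log2(1,000) is roughly 10.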

There are of course many problems where computing power does help. If I want to simulate the movements of billions of stars in a galaxy or compete in high-frequency stock trading, I’ll appreciate the help of a computer. As such, computers are tools that fit into niches to solve problems in societal mechanisms of our design. Think of AI simply as another society-changing invention like the internal combustion engine, the shovel, plumbing, or air-conditioning. And think of how to design mechanisms that make it easier to deal with the world’s complexity. Be careful when you use AI systems, because they have failure modes. Also be careful when you choose to use non-AI systems, because they too have failure modes. I’m not sure whether, on the whole, AI or non-AI systems are safer, more reliable, or more effective. I suggest using the best tools for the job, regardless of whether they’re labeled “AI” or not.