IT’S STILL EARLY DAYS

NICK BOSTROM

Professor, Oxford University; director, Future of Humanity Institute, Oxford Martin School; author, Superintelligence: Paths, Dangers, Strategies

First, what I think about humans who think about machines that think: I think that for the most part we’re too quick to form an opinion on this difficult topic. Many senior intellectuals are still unaware of the recent body of thinking that has emerged on the implications of superintelligence. There’s a tendency to assimilate any complex new idea to a familiar cliché. And for some bizarre reason, many people feel it’s important to talk about what happened in various science fiction novels and movies when the conversation turns to the future of machine intelligence (though one hopes that John Brockman’s admonition to the Edge commentators to avoid doing so here will have a mitigating effect on this occasion).

With that off my chest, I will now say what I think about machines that think: Machines are currently very bad at thinking except in certain narrow domains. They’ll probably one day get better at it than we are, just as machines are already much stronger and faster than any biological creature.

There’s little information about how far we are from that point, so we should use a broad probability distribution over possible arrival dates for superintelligence. The step from human-level AI to superintelligence will most likely be quicker than the step from current levels of AI to human-level AI (though, depending on the architecture, the concept of “human-level” may not make a great deal of sense in this context). Superintelligence could well be the best thing or the worst thing that will ever happen in human history, for reasons I have described elsewhere.

The probability of a good outcome is determined mainly by the intrinsic difficulty of the problem—what the default dynamics are and how difficult it is to control them. Recent work indicates that this problem is harder than one might have supposed. However, it’s still early days, and perhaps there’s some easy solution or things will work out without any special effort on our part.

Nevertheless, the degree to which we manage to get our act together will have some effect on the odds. The most useful thing we can do at this stage is to boost the tiny but burgeoning field of research that focuses on the superintelligence-control problem and study questions such as how human values can be transferred to software. The reason to push this now is partly to begin making progress on the control problem and partly to recruit top minds into this area, so that they’re already in place when the nature of the challenge becomes clearer. It looks like mathematics, theoretical computer science, and maybe philosophy are the disciplines most needed at this stage. That’s why there is an effort under way to drive talent and funding into this field and to begin to work out a plan of action.