SOFT AUTHORITARIANISM

MICHAEL VASSAR

Futurist; founder and chief science officer, BayesCraft

“Think? It’s not your job to think! I’ll do the thinking around here.”
INTELLIGENT UNTHINKING SYSTEM, TO INTELLIGENT THINKING SYSTEM

Machines that think are coming. Right now, though, think about intelligent tools. Intelligent tools don’t think. Search engines don’t think. Neither do robot cars. We humans often don’t think either; we usually get by, as other animals do, on autopilot. Our bosses generally don’t want to see us thinking; that would make things unpredictable and threaten their authority. If machines replace us wherever we aren’t thinking, we’re in trouble.

Let’s assume that “think” refers to everything humans do with brains. Experts call a thinking machine a general artificial intelligence, and they agree that such a machine could make us extinct. Extinction, however, isn’t the only existential risk. In the eyes of machine-superintelligence expert Nick Bostrom, an existential risk is one that can “annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”11 Examples include the old standby of nuclear war, newer concerns like runaway global warming, fringe hypotheses like particle-accelerator accidents, and the increasingly popular front-runner, general artificial intelligence. Over the next couple of decades, though, the most serious existential risks come from kinds of intelligence that don’t think and from new kinds of soft authoritarianism, which may emerge in a world where most decisions are made without thinking.

Some of the things people can do with brains are impressive and unlikely to be matched by software anytime soon. Writing a novel, seducing a lover, and building a company are all far beyond the abilities of intelligent tools. So, of course, is the invention of a machine that can truly think. On the other hand, most thinking can be improved on with thin slicing, which can be improved on with procedures, which are almost never a match for algorithms. In medical diagnosis and decision making, for instance, ordinary medical judgment is improved by introducing checklists, though humans with checklists remain less reliable than AI systems. Automated nursing isn’t even on the horizon, but a hospital where machines made all the decisions would be a much safer place to be a patient, and it’s hard to argue against that sort of objectivity.

The more we leave our decisions to machines, the harder it becomes to take back control. In a world where self-driving cars are the norm and traffic casualties have been reduced to nearly zero as a result, it will be seen as irresponsible and probably illegal for a human to drive. Might it become equally objectionable for investors to invest in businesses that depart from statistically established best practices? For children to be educated in ways that have been determined to lead to lower life expectancy or income? If so, will values that aren’t easily represented by machines, such as a good life, tend to be replaced with correlated but distinct metrics, such as serotonin and dopamine levels? It’s easy to overlook the implicit authoritarianism that sneaks in with such interpretations of value, yet any society that pursues good outcomes has to decide how to measure the good, a problem that will be upon us before we have thinking machines to help us think it through.