MAKE THE THING IMPOSSIBLE TO HATE

RORY SUTHERLAND

Creative director and vice-chair, Ogilvy Group, U.K.; columnist, The Spectator (London)

One possibility, of course, is that some malign superintelligence already exists on Earth but is shrewd enough to disguise its existence, its intentions, or its intelligence. I don’t think this act of deception would be particularly difficult; we aren’t very good at spotting what to fear.

For most of evolutionary time, the most salient avoidable threats to our survival came from things that were roughly the same size as we were and actively wanted to hurt us—ferocious animals, for instance, or other people. Over time, we got pretty good at recognizing something or someone who was nasty. We also learned to minimize the risk of infection, but we learned this unwittingly, through instinctive revulsion, social norms, or religious observance. We didn’t spend much time consciously thinking about germs, for the simple reason that we didn’t know they existed.

To sell products that promote hygiene, consumer-goods companies have plowed billions of dollars into advertising campaigns that dramatize the risk of bacteria, or sell the idea of cleanliness obliquely through appeals to social status. I can confidently predict that nobody will ever come into my office suggesting an advertising campaign to raise awareness of the risk you run when approaching an escaped tiger.

So, when we think about threats from technology, we automatically fall back on instincts honed a million years ago. This is why the first prototype for a driverless car has been designed to look so damnably cute—in short, like a puppy on wheels. It can travel only at relatively low speeds and is small and light, but it also artfully exploits pareidolia and our parental urges with its infantlike, wide-eyed facial expression and little button nose. My inner marketer admires this. It’s exactly what I would have recommended: Make the thing impossible to hate. Even if the technology is ultimately more dangerous than an AK-47, I find it hard to imagine myself taking an axe to it in a fit of Luddism.

But is it a mental patch or a mental hack? Is it designed to look cute to overcome an unwarranted innate fear of such technologies, or is it a hack to lull us into a false sense of confidence? I don’t know. Our fear of driverless cars might be akin to the fear that our children will be kidnapped (high in saliency, low in probability)—or it might be justified. Either way, our level of fear will be determined by factors (including cuteness) not really relevant to the level of threat.

Which brings me to a second question.

Though the driverless car looks cute, we’re at least aware of possible dangers. It seduces us, but we’re aware of being seduced. Are there already in existence technologies (in the broadest sense) that have seduced us so effectively, and been adopted so quickly and widely, that we may learn of their risks only through a sudden, unexpected, and immense problem? What might be the technological equivalent of potato blight?

Our current belief in “technological providence” is so strong that it would be fairly easy for us all to fall into this trap—where we’re so excited by something new that we fail to notice what other things it might give rise to until it’s too late. For the first few hundred years, gunpowder was used not for warfare but for entertainment.

And just as airline pilots regularly practice landing by hand, even though they’re rarely required to operate without an autopilot, should we, too, set aside periods in our life when we deliberately eschew certain technologies just to remind ourselves how to live without them, to maintain technological diversity, to keep in trim the mental muscles made weak through underuse? Perhaps. But what the mechanism is for coordinating this behavior among large groups of people, I don’t know.

I recently proposed that companies adopt a weekly “e-mail sabbath,” because I believed that the overuse of e-mail was driving other forms of valuable interaction into extinction. We’re losing the knack of communicating in other ways. Most people thought I was mad. A few hundred years ago, a pope or rabbi might have told us to do this—or the Archbishop of Canterbury. There’s nobody now.

I always fear cock-ups more than conspiracies. Compared to the threat of the unintended consequence, the threat of intentionally evil cyborgs is remote enough that it can be safely left to Hollywood for now.