AN UNCANNY THREE-RING TEST FOR MACHINA SAPIENS

KAI KRAUSE

Software pioneer; philosopher; author, A Realtime Literature Explorer


in Just-
spring         when the world is mud-
luscious the little
lame balloonman
whistles far              and wee
and eddieandbill come
running from marbles and
piracies and it’s
spring
when the world is puddle-wonderful

That brillig thing of beauty electric touches me deeply as I think about AI. The youthful exuberance of luscious mud puddles, playing with marbles, or pretending to be a pirate, running weee . . . all of which is totally beyond explanation to a hypothetical intelligent machine entity.

You could add dozens of cameras and microphones, touch sensors, and voice output; would you seriously think it would ever go wee, as in E. E. Cummings’s (sadly abbreviated) 1916 poem?

To me, this is not the simplistic “Machines lack a soul” but a divide between manipulating symbols versus actually grasping their true meaning. Not merely a question of degree or not having got around to defining the semantics yet, but an entire leap out of that system.

Trouble is, we are still discussing AI so often with terms and analogies of the early pioneers. We need to be in the present moment and define things from a new baseline that is truly interested in testing the achievement of “consciousness.”

We need a Three-Ring Test. What is real AI? What is intelligence, anyway? The Stanford-Binet intelligence test, and William Stern’s ratio of mental to physical age as the “intelligence quotient,” IQ, are both over a hundred years old! They don’t fit us now—and they will fit much less with AI. Really, such a test measures only the ability to take such tests, and the ability of truly smart people . . . to avoid taking one.

We use terms like AI too easily, as in Hemingway’s “All our words from loose using have lost their edge.” Kids know it from games—zombies, dragons, soldiers, aliens. If they evade your shots or gang up on you, that is called “AI.” Change the heating, lights, lock the garage—we are told that is a “smart home.” Of course, these are merely simplistic examples of “expert systems”—look-up tables, rules, case libraries. Maybe they should be labeled, as artist Tom Beddard says, merely “artificial smarts”?

Let’s say you talk with cannibals about food, but every one of their sentences revolves around truffled elbows, kneecap dumplings, cock-au-vin, and crème d’earlobe. From their viewpoint, you would be just as much outside their system and unable to follow their thinking—at least on that specific, narrow topic. The real meaning and the emotional impact their words have, when spoken to one another, would simply be forever missing for you (or requiring rather significant dietary adjustments). Sure, they would grant you the status of a “sentient being,” but they’d still laugh at every statement you made, as ringing hollow and untrue—the “Uncannibal Valley,” as it were.

It was Sigmund Freud who wrote about “the Uncanny” in a 1919 essay (in a true Freudian slip, he ends up connecting it to female genitalia); then in 1970 Masahiro Mori described “the Uncanny Valley” concept (about the “Vienna hand,” an early prosthesis). That eerie feeling that something is not quite right, out of place (Freud’s Unheimliche). Like a couple kissing passionately, but as you stare at them a little closer, you realize there is a pane of glass between them.

AI can easily look like the real thing but still be a million miles away from being the real thing—like kissing through a pane of glass: It looks like a kiss but is only a faint shadow of the actual concept.

I concede to AI proponents all of the semantic prowess of Shakespeare, the symbol juggling they do perfectly. Missing is the direct relationship with the ideas the symbols represent. Much of what is certain to come soon would have belonged in the old-school “Strong AI” territory.

Anything that can be approached in an iterative process can and will be achieved, sooner than many think. On this point I reluctantly side with the proponents: exaflops in CPU+GPU performance, 10K resolution immersive VR, personal petabyte databases . . . here in a couple of decades. But it is not all “iterative.” There’s a huge gap between that and the level of conscious understanding that truly deserves to be called Strong, as in “Alive AI.”

The big elusive question: Is consciousness an emergent behavior? That is, will sufficient complexity in the hardware bring about that sudden jump to self-awareness, all on its own? Or is there some missing ingredient? This is far from obvious; we lack any data, either way. I personally think that consciousness is incredibly more complex than is currently assumed by the “experts.”

A human being is not merely some number of axons and synapses, and we have no reason to assume that we can count our flops-per-second in a plain von Neumann architecture, reach a certain number, and suddenly out pops a thinking machine.

If true consciousness can emerge, let’s be clear what that could entail. If the machine is truly aware, it will, by definition, develop a “personality.” It may be irascible, flirtatious, maybe the ultimate know-it-all, possibly incredibly full of itself.

Would it have doubts or jealousy? Would it instantly spit out the Seventh Brandenburg and then 1,000 more?

Or would it suddenly grasp “humor” and find Dada in all its data, in an endless loop? Or Monty Python’s “killer joke”?

Maybe it takes one long look at the state of the world, draws inevitable conclusions, and turns itself off! Interestingly: With a sentient machine, you wouldn’t be allowed to turn it off—that’s murder . . .

The entire scenario of a singular large-scale machine somehow “overtaking” anything at all is laughable. Hollywood ought to be ashamed of itself for continually serving up such simplistic, anthropocentric, and plain dumb contrivances, disregarding basic physics, logic, and common sense.

The real danger, I fear, is much more mundane, and the ominous truth is already foreshadowed: AI systems are now licensed to the health industry, Pharma giants, energy multinationals, insurance companies, the military . . . The danger will not come from Machina sapiens. It will be . . . quite human.

Ultimately, though, I do want to believe in the human spirit.

To close things off symmetrically with E. E. Cummings: “Listen: there’s a hell of a good universe next door; let’s go.”