MANIPULATORS AND MANIPULANDA

JOSH BONGARD

Associate professor of computer science, University of Vermont; coauthor (with Rolf Pfeifer), How the Body Shapes the Way We Think

Place a familiar object on a table in front of you, close your eyes, and manipulate that object so that it hangs upside down above the table. Your eyes are closed so that you can focus on your thinking. Which way did you reach out, grasp, and twist that object? What sensory feedback did you receive to know whether you were succeeding or failing? Now close your eyes again and think about manipulating someone you know into doing something he might not want to do. Again, observe your own thinking: What strategies might you employ? If you implement those strategies, how will you distinguish success from stalemate?

Although much recent progress has been made in building machines that sense patterns in data, most people feel that general intelligence involves action—reaching some desired goal, or failing that, keeping one’s options open. It’s hypothesized that this embodied approach to intelligence allows humans to use physical experiences (such as manipulating objects) as scaffolding for learning more subtle abilities (such as manipulating people). But our bodies shape the kinds of physical experiences we have. For example, we can manipulate only a few objects at once, because we have only two hands. Perhaps this limitation also constrains our social abilities in ways we have yet to discover. The cognitive linguist George Lakoff taught us that we can find clues to the body-centrism of thinking in metaphors: We counsel one another not to “look back” in anger because, based on our bias to walk in the direction of our forward-facing eyes, past events tend to literally be behind us.

So in order for machines to think, they must act. And in order to act they must have bodies to connect physical and abstract reasoning. But what if machines don’t have bodies like ours? Consider Hans Moravec’s hypothetical Bush Robot: Picture a shrub in which each branch is an arm and each twig is a finger. This robot’s fractal nature would allow it to manipulate thousands or millions of objects simultaneously. How might such a robot differ in its thinking about manipulating people, compared with the way people think about manipulating people?

One of many notable deficiencies in human thinking is dichotomous reasoning—believing something is black or white rather than considering its particular shade of gray. But we’re rigid and modular creatures; our branching set of bones houses fixed organs and supports fixed appendages with specific functions. What about machines that aren’t so black and white? Thanks to advances in materials science and 3-D printing, soft robots are starting to appear. Such robots can change their shape in extreme ways; they may in the future be composed of 20 percent battery and 80 percent motor at one place on their surface, 30 percent sensor and 70 percent support structure at another, and 40 percent artificial material and 60 percent biological matter someplace else. Such machines may be much better able to appreciate gradations than we can.

Let’s go deeper. Most of us have no problem using the singular pronoun I to refer to the tangle of neurons in our heads. We know exactly where we end and the world—and other people—begins. But consider modular robots, small cubes or spheres that can physically attach to and detach from one another at will. How would such machines approach the self/nonself discrimination problem? Might such machines be able to empathize more strongly with other machines (and maybe even people) if they could physically attach to them or even become part of them?

That’s how I think machines will think: in a familiar way because they’ll use their bodies as tools to reason about the world, yet in an alien way because bodies different from human ones will lead to very different modes of thought.

But what do I think about thinking machines? I find the ethical side of thinking machines straightforward. Their dangerousness will correlate exactly with how much leeway we give them in fulfilling the goals we set for them. Machines told to “detect and pull broken widgets from the conveyor belt in the best way possible” will be quite useful, intellectually uninteresting, and likely to destroy more jobs than they create. Machines instructed to “educate this recently displaced worker”—or young person—“in the best way possible” will create jobs and possibly inspire the next generation. Machines commanded to “survive, reproduce, and improve in the best way possible” will give us the most insight into all the different ways in which entities may think, but they will probably give us humans a very short window of time in which to relish that insight. AI researchers and roboticists will sooner or later discover how to create all three of those species. Which ones we wish to call into being is up to us.