Assistant professor of media arts & sciences and founder, Playful Systems group, MIT Media Lab; cofounder, Everybody at Once

What force is really in control,
The brain of a chicken or binary code?
Who knows which way I’ll go, X’s or O’s?

In the 1980s, New York City’s Chinatown had the dense gravity of Chinatown Fair, a video arcade on Mott and Bowery. Beyond the Pac-Man and Galaga stand-ups was the one machine you’d never find anywhere else: Tic-Tac-Toe Chicken.

It was the only machine that was partly organic, the only one with a live chicken inside. As best I could tell, the chicken could play Tic-Tac-Toe effectively enough to score a tie against any human. Human opponents would enter their moves with switches, and the chicken would make her way over to an empty position on the illuminated Tic-Tac-Toe grid on the floor of the cage, which displayed both players’ moves.

More than once, when I was cutting high school trig, I stood in front of that chicken, wondering how all this worked. There was no obvious positive reinforcement (e.g., grain), so I could imagine only the negative reinforcement of a light electrical current running through the “wrong moves” in the cage, routing the chicken to the one point on the grid that could produce a draw.

When I think about thinking machines, I think about that chicken. Had the Chinatown Fair featured a Tic-Tac-Toe Computer, it would never have competed with high school, let alone Pac-Man. It's a well-known and banal truth that even a rudimentary computer can play the game perfectly, forcing at least a draw every time. That's why we were captivated by the chicken.
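Just how banal that truth is can be made concrete: a plain minimax search over the nine-cell board proves that Tic-Tac-Toe, played perfectly by both sides, always ends in a draw, the same tie the chicken managed. A minimal sketch (the function names are mine, not anything from an arcade cabinet):

```python
# Every row, column, and diagonal, as index triples into a 9-cell board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable outcome for X with perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if not cell]
    if not moves:
        return 0  # full board, no winner: a draw
    scores = []
    for i in moves:
        board[i] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[i] = None  # undo the trial move
    # X picks the best score for X; O picks the worst.
    return max(scores) if player == "X" else min(scores)

# From the empty board, perfect play by both sides yields 0: a forced draw.
print(minimax([None] * 9, "X"))  # 0
```

A brute-force search of the whole game tree like this is all the "understanding" the machine needs, which is rather the point.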

The magic was in imagining a thinking chicken, much the same way that in 2015 there’s magic in imagining a thinking machine. But if the chicken wasn’t thinking about Tic-Tac-Toe but could still play it successfully, why do we say the computer is thinking when it’s playing Tic-Tac-Toe?

To say so is tempting, because we have a model of our brain—electricity moving through networks—coincidentally congruent with the models we build for machines. This congruence may or may not turn out to be real, but either way, what makes it seem like thinking is not simply the ability to calculate the answers but the sense that there's something wet and messy in there. In 2015, perversely, it's machines that make mistakes and humans that have to explain the mistakes.

We look to the irrational when the rational fails us, and it’s the irrational part that reminds us the most of thinking. The physicist David Deutsch has suggested a framework for distinguishing the answers machines provide from the explanations that humans need. And I believe that for the foreseeable future we’ll continue to look to biological organisms when we seek explanations. Not just because brains are better at that task but because it’s not what machines aspire to.

It’s boring to lose to a computer but exciting to lose to a chicken, because somehow we know that the chicken is more like us—certainly more so than the electrified grid underneath her feet. For as long as thinking machines lack the limbic presence and imprecision of a chicken, computers will keep doing what they’re so good at: providing answers. And as long as life is about more than answers, humans—and yes, even chickens—will stay in the loop.