Writer, artist, designer; author, Worst. Person. Ever.

Let’s quickly discuss larger mammals. Take dogs: We know what a dog is, and we understand “dogginess.” Look at cats: We know what cats are and what “cattiness” is. Now take horses. Suddenly it gets harder. We know what a horse is, but what is “horsiness”? Even my friends with horses have trouble describing horsiness to me. And now take humans: What are we? What is “humanness”?

It’s sort of strange, but here we are, 7 billion of us now, and nobody really knows the full answer to these questions. One undeniable thing we humans do, though, is make things, and through these things we find ways of expressing humanness—ways we didn’t previously know of. The radio gave us Hitler and the Beach Boys. Barbed wire and air-conditioning gave us western North America. The Internet gave us a vanishing North American middle class and kitten GIFs.

It’s said that new technologies alienate people, but the thing is, UFOs didn’t land and hand us new technologies—we made them ourselves, and thus they can only ever be, well, humanating. And this is where we get to AI. People assume that AI, or machines that think, will have intelligence alien to our own, but that’s not possible. In the absence of benevolent space aliens, only we humans will have created any nascent AI, and thus it can only mirror, in whatever manner, our humanness or specieshood. So when people express concern about alien intelligence, or the Singularity, what I think they’re really expressing is angst about those unpretty parts of our collective being that currently remain unexpressed but will become, somehow, dreadfully apparent with AI.

As AI will be created by humans, its interface will be anthropocentric, just as AI designed by koala bears would be koalacentric. This means AI software will be humankind’s greatest coding kludge, as we try to mold it to our species’ specific needs and data. Fortunately, anything smart enough to become sentient will probably be smart enough to rewrite itself from AI into cognitive simulation, at which point our new AI could become, for better or worse, even more human. We all hope for a Jeeves & Wooster relationship with our sentient machines, but we also need to prepare ourselves for a Manson & Fromme relationship; they’re human too.

Personally I wonder whether the software needed for AI will be able to keep pace with the hardware in which it can live. Possibly the smart thing for us to do right now would be to set up a school whose sole goal is to imbue AI with personality, ethics, and compassion. It’s certainly going to have enough data to work with, once it’s born. But how best to deploy your grade-six report card, all of Banana Republic’s returned merchandise data for 2037, and all of Google Books?

In the Internet’s early days, we mostly had people communicating with other people. As time goes by, we increasingly have people communicating with machines. We all get excited about AI possibly finding patterns deep within metadata, and as the push grows to decode those profound volumes of metadata, the Internet will become largely about machines speaking with other machines—and what they’ll be talking about, of course, is us, behind our backs.