Senior consultant (and former editor in chief), New Scientist; author, After the Ice: Life, Death, and Geopolitics in the New Arctic

High intelligence and warm feelings toward our fellow humans don’t go so well together in the popular imagination. The superintelligent villains of James Bond movies are the perfect example, always ruthless and intent on world domination. So it’s no surprise that first reactions to “machines that think” are of how they might threaten humankind.

What we’ve learned about the evolution of our intelligence adds to our fears. As humans evolved to live in ever larger social groups than those of our primate relatives, so did the need to manipulate and deceive others, to label friends and foes, to keep score of slights and favors, and to deploy all the other social skills we need to prosper individually. Bigger brains and “Machiavellian intelligence” were the result.

Still, we shouldn’t go on to believe that thinking is inextricably entangled with the need to compete with others and win, just because that was a driving force in the evolution of our intelligence. We can create artificial intelligence—or intelligences—without the perversities of human nature, and without that intelligence having any needs or desires at all. Thinking doesn’t necessarily involve the plotting and lusting of an entity that evolved first and foremost to survive. If you look around, it’s this neutral kind of artificial intelligence that’s already appearing everywhere.

It helps if we don’t view intelligence anthropocentrically, in terms of our own special human thinking skills. Intelligence has evolved for the same good reason in many different species: It’s there to anticipate the emerging future and help an organism deal with whatever that future throws at it—whether it must dodge a rock or, if it’s a bacterium, sense a gradient in a food supply and figure out which direction leads to a better outcome.

By recognizing intelligence in this more general way, we can see the many powerful artificial intelligences at our disposal already. Think of climate models. We can make good guesses about the state of the entire planet decades into the future and predict how a range of our own actions will change those futures. Climate models are the closest thing we have to a time machine. Think of all the high-speed computer models used in stock markets: All seek to know the future slightly ahead of everyone else and profit from that knowledge. So, too, do all those powerful models of your online buying behavior: All aim to predict what you’ll be likely to do, and to profit from that knowledge. As you gladly buy a book “Recommended Specially for You,” you’re already in the hands of an alien intelligence, one that nudges you toward a future you wouldn’t have imagined alone and that may know your tastes better than you know them yourself.

Artificial intelligence is already powerful and scary, although we may debate whether it should be called “thinking” or not. And we’ve barely begun. Useful intelligence, some of it robotic, is going to keep arriving in bits and pieces of increasing power for a long time to come and will change our lives, perhaps while we scarcely notice. It will become an extension of us, like other tools. And it will make us ever more powerful.

We should worry about who will own artificial intelligence, for even some current uses are troubling. We shouldn’t worry about autonomous machines that might one day think in a humanlike way. By the time a clever humanlike machine gets built, if it ever does, it will come up against humans with their usual Machiavellian thoughts and long accustomed to wielding all the tools of artificial intelligence that made the construction of that thinking robot possible. It’s the robot that will feel afraid. We will be the smart thinking machines.