KILLER THINKING MACHINES KEEP OUR CONSCIENCE CLEAN

KURT GRAY

Assistant professor of psychology, University of North Carolina, Chapel Hill


Machines have long helped us kill. From catapults to cruise missiles, mechanical systems have allowed humans to better destroy one another. Despite the increased sophistication of killing machines, one thing has remained constant—human minds are always morally accountable for their operation. Guns and bombs are inherently mindless, and so blame slips past them to the person who pulled the trigger.

But what if machines had enough of a mind that they could choose to kill on their own? Such a thinking machine could retain the blame for itself, keeping clean the consciences of those who benefit from its work of destruction. Thinking machines may improve the world in many ways, but they may also let people get away with murder.

Humans have long sought to distance themselves from acts of violence, reaping the benefits of harm without sullying themselves. Machines not only increase destructive power but also physically obscure our harmful actions. Punching, stabbing, and choking have been replaced by the more distant—and tasteful—actions of button pressing or lever pulling. However, even with the increased physical distance allowed by machine intermediaries, our minds continue to ascribe blame to those people behind them.

Studies in moral psychology reveal that humans have a deep-seated urge to blame someone or something in the face of suffering. When others are harmed, we search not only for a cause but for a mental cause—a thinking being who chose to cause the suffering. This thinking being is typically human but need not be. In the aftermath of hurricanes and tsunamis, people often blame the hand of God, and in some historical cases people have even blamed livestock—French peasants once placed a pig on trial for murdering a baby.

Generally, our thirst for blame requires only a single thinking being. When we find one thinking being to blame, we’re less motivated to blame another. If a human is to blame, there’s no need to curse God. If a low-level employee is to blame, there’s no need to fire the CEO. And if a thinking machine is to blame for someone’s death, then there’s no need to punish the humans who benefit.

Of course, for a machine to absorb blame, it must be a legitimate thinker and act in new, unpredicted ways. Perhaps machines could never do something truly new, but the same argument applies to humans “programmed” by evolution and their cultural context. Consider children, who are undoubtedly programmed by their parents and yet—through learning—are able to develop novel behavior and moral responsibility. Like children, modern machines are adept at learning, and it seems inevitable that they’ll develop contingencies unpredicted by their programmers. Already, algorithms have discovered new things unguessed by the humans who created them.

Thinking machines might make their own decisions, but they shield humans from blame only when they decide to kill, standing between our minds and the destruction we desire. Robots already play a large role in modern combat: Drones have killed thousands in the past few years, but they are currently fully controlled by human pilots. For drones to deflect blame, they must make the decisions themselves; machines must learn to fly Predators all on their own.

This scenario may send shivers down spines (including mine), but it makes cold sense from the perspective of policymakers. If collateral damage can be blamed on the decisions of machines, then military mistakes are less likely to hurt a politician's reelection chances. Moreover, if minded machines can be overhauled or removed—machine "punishment"—people will feel less need to punish those in charge, whether for fatalities of war, botched (robotic) surgeries, or (autonomous) car accidents.

Thinking machines are complex, but the human urge to blame is relatively simple. Death and destruction compel us to find a single mind to hold responsible. Sufficiently smart machines—if placed between destruction and ourselves—should absorb the weight of wrongdoing, shielding our own minds from others' condemnation. We should all hope this prediction never comes true, but when advancing technology collides with modern understandings of moral psychology, dark potentials emerge. To keep our consciences clean, we need only create a thinking machine and then vilify it.