Use of Robots in War


The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.

The Fourth Age explores the implications of automation and AI for humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”

One of those deep questions of our time:

Advancements in technology have always increased the destructive power of war. The development of AI will be no different. In this excerpt from The Fourth Age, Byron Reese considers the ethical implications of the development of robots for warfare.


Most of the public discourse about automation relates to employment, which is why we spent so much time examining it. A second area of substantial debate is the use of robots in war.

Technology has changed the face of warfare dozens of times in the past few thousand years. Metallurgy, the horse, the chariot, gunpowder, the stirrup, artillery, planes, atomic weapons, and computers each had a major impact on how we slaughter each other. Robots and AI will change it again.

Should we build weapons that can make autonomous kill decisions based on factors programmed into them? Proponents maintain that such robots may reduce the number of civilian deaths, since robots follow their protocols exactly. In a split second, a soldier subject to fatigue or fear can make a literally fatal mistake; to a robot, a split second is all it ever needs to apply its rules without error.

This may well be true, but it is not the primary reason the world’s militaries want robots with AI. Three things make these weapons compelling to them. First, they will be more effective at their missions than human soldiers. Second, there is a fear that potential adversaries are developing the same technologies. And third, they will reduce the human casualties of the militaries that deploy them. The last one has a chilling side effect: by lowering the political cost of warfare, it could make war more common.

The central issue, at present, is whether or not a machine should be allowed to independently decide whom to kill and whom to spare. I am not being overly dramatic when I say the decision at hand is whether or not we should build killer robots. There is no “can we” involved. No one doubts that we can. The question is, “Should we?”

Many AI researchers who do not work with the military believe we should not. Over a thousand scientists signed an open letter urging a ban on fully autonomous weapon systems. Stephen Hawking, who also lent his name and prestige to the letter, wrote an editorial in 2014 suggesting that these weapons might end up destroying the species through an AI arms race.

Although there appears to be a lively debate on whether to build these systems, it seems somewhat disingenuous. Should robots be allowed to make a kill decision? Well, in a sense, they have been for over a century. Humans were perfectly willing to plant millions of land mines that blew the legs off a soldier or a child with equal effectiveness. These weapons had a rudimentary form of AI: if something weighed more than fifty pounds, they detonated. If a company marketed a mine that could tell the difference between a child and a soldier, perhaps by weight or length of stride, it would be used because of its increased effectiveness. And that would be better, right? If a newer model could sniff for gunpowder before blowing up, it would be used as well, for the same reason. Pretty soon you work your way up to a robot making a kill decision with no human involved. True, at present, land mines are banned by treaty, but their widespread use for such a long period suggests we are comfortable with a fair amount of collateral damage in our weapon systems. Drone warfare, missiles, and bombs are all similarly imprecise. They are each a type of killer robot. It is unlikely we would turn down more discriminating killing machines. I am eager to be proved wrong on this point, however. Professor Mark Gubrud, a physicist and an adjunct professor in the Curriculum in Peace, War, and Defense at the University of North Carolina, says that with regard to autonomous weapons, the United States has “a policy that pretends to be cautious and responsible but actually clears the way for vigorous development and early use of autonomous weapons.”
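The escalation from fifty-pound trigger to stride detection to gunpowder sniffing is easy to see in code. The following is a deliberately simplistic, hypothetical sketch in Python; every sensor, threshold, and function name is invented for illustration, and nothing here describes a real weapon. The point is only that each rule looks like a modest improvement over the last, yet none of them ever consults a human.

```python
# Hypothetical illustration of the escalation described above. All names and
# thresholds are invented; this is not a description of any real system.
from dataclasses import dataclass


@dataclass
class Contact:
    weight_lbs: float        # estimated weight of whatever triggered the sensor
    stride_length_m: float   # estimated stride length (a crude adult/child proxy)
    gunpowder_detected: bool # whether a chemical sniffer registered gunpowder


def classic_mine(contact: Contact) -> bool:
    """Step 1: the century-old rule -- detonate on anything over fifty pounds."""
    return contact.weight_lbs > 50


def discriminating_mine(contact: Contact) -> bool:
    """Step 2: 'better' -- also require an adult-length stride."""
    return classic_mine(contact) and contact.stride_length_m > 0.7


def sniffing_mine(contact: Contact) -> bool:
    """Step 3: 'better still' -- also require a gunpowder signature."""
    return discriminating_mine(contact) and contact.gunpowder_detected


# A child-sized contact: the classic mine fires, the "improved" ones do not.
child = Contact(weight_lbs=60, stride_length_m=0.4, gunpowder_detected=False)
print(classic_mine(child), sniffing_mine(child))  # True False
```

Each refinement is defensible as a reduction in collateral damage, which is exactly why the slope is slippery: at no step does anyone decide to hand the kill decision to a machine, yet by the last step the machine holds it.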

And yet the threats that these weapon systems would be built to counter are real. In 2014, the United Nations held a meeting on what it calls “Lethal Autonomous Weapons Systems.” The report that came out of that meeting maintains that these weapons are also being sought by terrorists, who will likely get their hands on them. Additionally, there is no shortage of weapon systems currently in development around the world that utilize AI to varying degrees. Russia is developing a robot that can detect and shoot a human from four miles away using a combination of radar, thermal imaging, and video cameras. A South Korean company is already selling a $40 million automatic turret which, in accordance with international law, shouts out a “turn around and leave or we will shoot” message to any potential target within two miles. It requires a human to okay the kill decision, but this was a feature added only due to customer demand. Virtually every country on the planet with a sizable military budget, probably about two dozen nations in all, is working on developing AI-powered weapons.

How would you prohibit such weapons even if there were a collective will to do so? Part of the reason nuclear weapons could be contained is that they are straightforward: an explosion either was caused by a nuclear device or it was not. There is no gray area. Robots with AI, on the other hand, are as gray as gray gets. How much AI would need to be present before the weapon is deemed illegal? The difference between a land mine and the Terminator is only a matter of degree.

GPS receivers are designed with built-in limits: they won’t report a position for an object traveling faster than 1,200 miles per hour or higher than 60,000 feet, a restriction meant to keep them from being used to guide missiles. But software is almost impossible to contain, so the AI to power a weapons system will probably be widely available. The hardware for these systems is expensive compared with rudimentary terrorist weapons, but trivially inexpensive compared with larger conventional weapon systems.
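That kind of limit amounts to a couple of lines of code. Here is a minimal sketch, assuming the figures quoted above; real receivers implement export restrictions in firmware and the exact thresholds and behavior vary by manufacturer, so the names and numbers below are illustrative only.

```python
# Minimal sketch of a built-in receiver limit, using the figures quoted above.
# Real firmware behavior differs by manufacturer; this is illustrative only.
MAX_SPEED_MPH = 1_200
MAX_ALTITUDE_FT = 60_000


def gps_fix_allowed(speed_mph: float, altitude_ft: float) -> bool:
    """Return False when the receiver should refuse to report a position."""
    return speed_mph <= MAX_SPEED_MPH and altitude_ft <= MAX_ALTITUDE_FT


# A hobbyist drone gets a fix; something flying at missile speed does not.
print(gps_fix_allowed(speed_mph=40, altitude_ft=400))      # True
print(gps_fix_allowed(speed_mph=1_500, altitude_ft=500))   # False
```

The contrast with AI software is the point: a hardware limit like this can be enforced at the point of manufacture, while an algorithm, once written, copies freely.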

Given all this, I suspect that attempts to ban these weapons will not work. Even if a robot is programmed to identify a target and then get approval from a human before destroying it, the approval step can obviously be turned off with the flip of a switch, and eventually someone would undoubtedly flip it.
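To make the fragility of that safeguard concrete, here is a hypothetical sketch of an engagement loop in which “human approval” is nothing more than a configuration flag. Every name and parameter is invented for illustration; the only claim is that removing the human is a one-argument change.

```python
# Hypothetical illustration of a human-in-the-loop step that exists only as a
# flag. All names are invented; this describes no real system.
from typing import Callable


def engage(target: str,
           require_human_approval: bool = True,
           approve: Callable[[str], bool] = lambda t: False) -> bool:
    """Engagement decision with an optional human-in-the-loop step.

    `approve` stands in for a human operator reviewing the target; by default
    it refuses, so nothing happens without an explicit yes.
    """
    if require_human_approval:
        return approve(target)
    return True  # the "flip of a switch": fully autonomous engagement


# As shipped: a human must say yes, so the default answer is no.
print(engage("contact-07"))                                 # False
# One changed argument later, the same code decides on its own.
print(engage("contact-07", require_human_approval=False))   # True
```

Shipping with the flag on satisfies customers and reads well in a policy document; turning it off requires no new engineering at all, which is why a ban that relies on such a step is hard to trust.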

AI robots may be perceived as such a compelling threat to national security that several countries will feel they cannot risk not having them. During the Cold War, the United States frequently worried about perceived or possible gaps in military capability with potentially belligerent countries. The bomber gap of the 1950s and the missile gap of the 1960s come to mind. An AI gap is even more fearsome for those whose job it is to worry about the plans of those who mean the world harm.


To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.


Comments

Mike Cole

Artificial intelligence could turn against its creator as well. Robots can be more destructive on both sides.

Save Future

Nice article! But how secure is our future? I think we should build a community with the motto “No Robots for War,” where every country commits not to use robots to strengthen its army.

Scott B. Ager

Indeed, such a scenario is not sci-fi. Some might have thought so a decade ago, but now it seems more of a reality, and the actual direction in which things are progressing.

Amphasis

I really cannot imagine what it would be like when robots are deployed on the battlefield. I think recharging them might be an issue.

David E. Kalman

That is well explained. I’d hate to be a robot in the future: none of the glory, and if you aren’t destroyed in battle, you’ll be recycled for parts if you lose. Being a robot just has no advantages in the future. And you thought your life was tough, human?
