Gigaom AI Minute – January 22


In this episode, Byron talks about how autonomous weapons are already in use.

Transcript

In a previous AI Minute, I asked the question, "Should we build autonomous killing robots for the military?" Today I'd like to ask a slightly different question: Will we in fact do it? Although there seems to be a lively debate about whether to build these systems, that debate is somewhat disingenuous.

Should robots be allowed to make kill decisions? Well, in a sense they have been doing so for over a century. Humans were perfectly willing to plant millions of landmines that blew the legs off a soldier or a child with equal effectiveness. Those mines had a rudimentary form of AI: if something weighed more than fifty pounds, they detonated.

If a company had marketed a mine that could tell the difference between a child and a soldier, perhaps by weight or length of stride, it would have been used because of its increased effectiveness. And that would be better, right? If a newer model came out that could sniff gunpowder before blowing up, it would be used as well, for the same reason.

Pretty soon, you work your way all the way up to a robot making a kill decision with no human involved. True, landmines are now banned by treaty, but their widespread use over such a long period suggests we're comfortable with a fair amount of collateral damage in our weapons systems. Drone warfare, missiles, and bombs are similarly imprecise; they are a type of killer robot. It is unlikely that we would turn down more discriminating killing machines.
