How do you draw the line between prosecuting a robot that does harm and its creator? Who bears the burden of the crime or wrongdoing?
I recently got the chance to respond to a short story by a science fiction writer I admire. The author, Paolo Bacigalupi, imagines a detective investigating the “murder” of a man by his artificial companion. The robot insists it killed its owner intentionally in retaliation for abuse and demands a lawyer.
Today’s robots are not likely to be held legally responsible for their actions. The interesting question is whether anyone will be. If a driverless car crashes, we can treat the car like a defective product and sue the manufacturer. But where a robot causes a truly unexpected harm, the law will struggle. Criminal law looks for mens rea, meaning intent. And tort law looks for foreseeability.
If a robot behaves in a way no one intended or foresaw, we might have a victim with no perpetrator. This could happen more and more as robots gain greater sophistication and autonomy.
Do tricky problems in cyber law and robotics law keep you awake at night?
Yes: intermediary liability. Personal computers and smartphones are useful precisely because developers other than the manufacturer can write apps for them. Neither Apple nor Google developed Pokémon Go. But who should be responsible if an app steals your data or a person on Facebook defames you? Courts and lawmakers decided early on that the intermediary—the Apple or Facebook—would not be liable for what people did with the platform.
The same may not be true for robots. Personal robotics, like personal computing, is likely to rise or fall on the ingenuity of third-party developers. But when bones instead of bits are on the line—when the software you download can touch you—courts are likely to strike a different balance. Assuming, as I do, that the future of robotics involves robot app stores, I am quite concerned that the people who make robots will not open them up to innovation, because of the uncertainty over whether they will be held responsible if someone gets hurt.
Would prosecuting someone who harms a robot be different from prosecuting someone who harms a non-thinking or non-intelligent piece of machinery?
It could be. The link between animal abuse and child abuse, for instance, is so strong that many jurisdictions require authorities responding to an animal abuse allegation to alert child protective services if kids are in the house. Robots elicit very strong social reactions. There are reports of soldiers risking their lives on the battlefield to rescue a robot. In Japan, people have funerals for robotic dogs. We might wonder about a person who abuses a machine that feels like a person or a pet. And, eventually, we might decide to enhance penalties for destroying or defacing a robot beyond what we usually levy for vandalism. Kate Darling has an interesting paper on this.
Should citizens be concerned about robotic devices in their homes compromising their privacy, or about hackers attacking their medical devices? How legitimate or illegitimate are people’s fears about the rise of technology?
People should be concerned about robots and artificial intelligence but not necessarily for the reasons they read about in the press. Kate Crawford of Microsoft Research and I have been thinking through how society’s emphasis on the possibility of the Singularity or a Terminator distorts the debate surrounding the social impact of AI. Some think that superintelligent AI could be humankind’s “last invention.” Many serious computer scientists working in the field scoff at this, pointing out that AI and robotics are technologies still in their infancy. But despite AI’s limits, these same experts advocate introducing AI into some of our most sensitive social contexts such as criminal justice, finance, and healthcare. As my colleague Pedro Domingos puts it: The problem isn’t that AI is too smart and will take over the world. It’s that it is too stupid and already has.
Ryan Calo is a law professor at the University of Washington and faculty co-director of the Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Department of Computer Science and Engineering. Calo holds courtesy appointments at the University of Washington Information School and the Oregon State University School of Mechanical, Industrial, and Manufacturing Engineering. He has testified before the U.S. Senate and German Parliament and been called one of the most important people in robotics by Business Insider. This summer, he helped the White House organize a series of workshops on artificial intelligence.
@rcalo on Twitter