Gigaom brings you our unique analysis and commentary on the present and future of AI.
Humans have to make moral and ethical choices every day; it's just part of being human. I wonder, though, whether we're going to reach a point where we allow machines to make those choices for us.
You no doubt know the classic self-driving car example: does the car drive itself off a cliff, or run over a person crossing the street? In the past, a human would have made that choice; in the future, a machine may make it.
Are we going to get so accustomed to that that a machine might make recommendations on who should receive medical care and who shouldn't? What is an acceptable amount of risk, or an acceptable amount of pollution, or any number of other questions onto which humans have historically had to project their ethical frameworks? Maybe math, embodied in artificial intelligence, becomes the new ethical framework, and when we ask "why did you do that?", the accepted reply becomes "because the machine said to."