Gigaom AI Minute – July 17

Do you have the right to an explanation of why an artificial intelligence made the suggestion or choice that it did? This will become an ever more contentious issue. There is a provision before the EU, already adopted in France, which requires companies that use artificial intelligence to make determinations about customers to be able to offer an explanation as to why the choice was made. This seems, on the one hand, very reasonable in cases having to do with the pricing of life insurance, for instance. But on the other hand, as artificial intelligence becomes more and more complicated, the "why" is incredibly difficult to tease out of the data. When a neural net, for instance, is using hundreds or thousands of different data sources across billions and billions of data points to come to the conclusion that it does, the explanation may be beyond human comprehension. So the question becomes: do we decide those are not legitimate uses of artificial intelligence, or do we just accept that there are some things the AI will choose that we will never understand?