Gigaom brings you our unique analysis and commentary on the present and future of AI.
Explainability is a popular topic of discussion in artificial intelligence. The idea is that if an AI makes a decision that affects you, you have a right to know how it made that decision. Now, I think everyone agrees that there would be different levels of this, right? A diagnosis from a medical professional might require a lot of explainability, but a restaurant suggestion may require none at all.
But I want to ask whether explainability is something people will actually insist on. I use a credit score as a reference point. You have a credit score, I have a credit score, and how it's calculated is completely opaque. There's no formula you can go online, read, and understand to see how it's computed. And yet that single number has an enormous impact on your life: it decides whether you're going to get a loan, among other things. We've been willing to put up with that without questioning it all that much, and I wonder whether artificial intelligence will be any different.