In this episode, Byron talks about explainability.
Transcript
One of the key concepts in artificial intelligence is explainability. That is, if an artificial intelligence makes a decision, we want it to be explainable. We want to understand why it made that decision. This is especially the case when it comes to things like medical diagnoses. If people don't have trust in the system, they won't necessarily trust its conclusions when they don't understand why it reached them.
But I wonder just how realistic explainability is. You can't have explainability unless you have understandability, right? I mean, explaining something in a way that a human cannot understand isn't really explainability at all. So then you have to ask the question, "Can humans understand everything that these systems do?" It may not be the case that they can.
A hurricane heading in one direction veers off in another. Why did the hurricane do that? Why did the hurricane not hit Tampa, but hit North Carolina instead? Well, in one sense, there's an answer to that: "Well, a low-pressure system emerged." But that just kicks the can down the road, doesn't it? Because then you say, "Well, why did the low-pressure system emerge?" and you keep going all the way back.
Something as complicated as weather doesn't necessarily have a "why." It's the butterfly's wings in Argentina that had something to do with that hurricane behaving the way it did. And therefore, to that extent, we may get an explanation, and it may even be an explanation of the system that's accurate, but in the end it may not tell us very much at all. It may not be something that gives us true understanding.