Gigaom AI Minute - July 7


In this episode, Byron talks about explainability.

Transcript

In a prior AI Minute, I talked a little bit about explainability: how people want an explanation of why an AI makes a decision. I talked about how explanations imply understandability, and that some decisions by AIs may not be understandable by humans. Why would this be? In large part, it's because AI models are systems, and they're systems with an enormous number of levers.

If you were to ask a question about the weather on Earth, broadly speaking, then you'd have to say, "Well, there really isn't a single 'why' for anything that happens." There are the oceans, there's the solar wind, there's solar activity, there's vegetation, and there are all of these other factors. But in addition to the complexity of it all, every single factor is interdependent with the others. So there's no way to understand just part of the system. The only way to understand the system is to understand it in its entirety and how everything within it interacts.

As AI models become more and more complex, this probably isn't feasible. Your oven, which has a thermostat, a heater, and a few basic components, is a system that you can understand. An AI model with billions of pieces of data, thousands of different fields, different weightings, and all the rest may be a system that's beyond understanding.
