Overview
Explainable AI
What it is: An approach to AI decision making in which the system’s decisions can be explained to humans, addressing the need to understand how conclusions are reached.
What it does: Various types of Artificial Intelligence crunch absurd amounts of data to make decisions and predictions better and faster than any human could. However, when thousands or even hundreds of thousands of variables are at play, it can be impossible to explain why the AI chose X over Y.
Why it matters: AI is still evolving, but it has come under the spotlight for inaccurate and divisive conclusions based on poor data sets. In some cases, explainability isn’t important: no one really cares why the AI selected a person’s address for a 100,000-piece direct mail campaign. In other areas, such as medical interventions or launching missiles, it is mission critical. “Because the AI said so” isn’t a sufficient explanation.
What to do about it: Evaluate the use cases of all planned and ongoing AI projects to determine if explainability is mission critical.
The Necessity of Explainability
There are a variety of reasons that some AIs must be explainable.
Trust. Some end users want a sanity check. The AI may be correct that an individual should invest in certain funds, but many people want to understand the rationale behind the advice.
Actionability. In some applications, the answer is useless without understanding why it is the answer. For example, an AI could create an accurate list of the 1,000 customers most likely to unsubscribe from your service next quarter, but unless you know why your customers are dissatisfied, you can’t do anything to retain them.
Accountability. If something goes wrong, someone is likely to be sued. Imagine that a person belonging to a protected minority is turned down for a mortgage. “Our AI makes great decisions” wouldn’t be a satisfactory defense in a discrimination suit. On the other hand, if the AI could explain that the applicant was turned down because of late payments, defaults on credit card debt, or an income that couldn’t support the monthly payments, then the applicant would see that the decision was made on legitimate grounds.
Actual and Perceived Bias. AIs can produce inaccurate and unfair results that spring from unrecognized biases in the training data and the development environment. Bias-related AI failures can be embarrassing for companies, as Nikon discovered when its cameras repeatedly warned users that subjects of a particular ethnicity appeared to be blinking.
Engineering. If you don’t understand how the AI is making decisions, it is very difficult to debug the application during development.
Reliability. Without understanding the AI’s logic, it is impossible to know whether it made the correct decisions for the correct reasons. An AI at the University of Washington was extremely successful at distinguishing huskies from wolves, but the researchers ultimately determined that it was relying on a single variable: the presence of snow in the photo. (A check like the one sketched at the end of this list can catch that kind of shortcut.)
Transparency. If a bank turns someone down for a loan, is it responsible for explaining why?
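Returning to the reliability example above, here is a minimal sketch of how to check which inputs a model actually relies on, using scikit-learn’s permutation importance on synthetic data. The feature names (“snow”, “ear_shape”, “snout_length”) and the data are invented for illustration; this is not the University of Washington study, which analyzed images, but it demonstrates the same idea of auditing a model’s reasoning.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
label = rng.integers(0, 2, n)                       # hypothetical: 0 = husky, 1 = wolf
snow = (label + (rng.random(n) < 0.1)) % 2          # spurious cue: matches the label ~90% of the time
ear_shape = label + rng.normal(0, 2.0, n)           # weak "real" signal
snout_length = label + rng.normal(0, 2.0, n)        # weak "real" signal
X = np.column_stack([snow, ear_shape, snout_length])

model = RandomForestClassifier(random_state=0).fit(X, label)

# Permutation importance measures how much accuracy drops when each feature is shuffled.
result = permutation_importance(model, X, label, n_repeats=10, random_state=0)
for name, score in zip(["snow", "ear_shape", "snout_length"], result.importances_mean):
    print(f"{name}: {score:.3f}")
# If "snow" dominates, the model has learned a shortcut rather than anything about the animals.
```

A check like this doesn’t explain individual decisions, but it quickly reveals when a model is leaning on a feature that shouldn’t matter.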
Explainability Paradigms
Creating an explainable AI (xAI) is an extremely difficult technical challenge.
The various approaches currently under development generally fall into one of three categories:
- Deep Explanation: Develop new deep learning techniques that are better at explaining what’s happening under the hood.
- Interpretable Models: Use simpler models that humans are more likely to understand, such as decision trees, similarity models, and knowledge graphs.
- Model Induction: Train a companion AI that analyzes the black box’s inputs and outputs and produces a human-understandable approximation of its behavior (a sketch of this approach follows the list).
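Below is a minimal sketch of the model-induction idea using scikit-learn and synthetic data (the dataset, models, and feature names are assumptions for illustration, not a reference implementation): an opaque model is trained first, then a shallow decision tree is fitted to the opaque model’s predictions so its rules can be read as a human-understandable approximation of the black box.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for whatever the black box was trained on.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3, random_state=0)

# 1. The "black box": accurate but opaque.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# 2. The companion model: a shallow tree trained on the black box's predictions,
#    so it approximates the black box's behavior rather than the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# 3. Fidelity: how often the readable approximation agrees with the black box.
fidelity = surrogate.score(X, black_box.predict(X))
print(f"Surrogate agrees with the black box on {fidelity:.0%} of inputs")

# 4. The surrogate's rules are something a human can actually read.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

If the fidelity score is low, the surrogate’s readable rules may not reflect what the black box is actually doing, which is a key caveat of model induction in general.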