
What is XAI? – An Easy Guide to Explainable Artificial Intelligence

What is Explainable AI (XAI)?

Explainable AI (XAI) is a set of techniques that help people understand how AI systems make decisions. This matters because it helps us trust AI systems and check that they are fair and unbiased.

Why is Explainable AI Important?

AI systems are becoming increasingly common in our lives, and they are being used to make important decisions, such as whether to approve a loan or whether to release someone from prison. However, many AI systems are like “black boxes” – we don’t know how they make their decisions. This can make it difficult to trust these systems, and it can also lead to problems such as bias and discrimination.

XAI can help to address these problems by making AI systems more transparent. With XAI, we can understand how AI systems make decisions, and we can identify and fix any problems.

How Does Explainable AI Work?

There are many different XAI techniques, and while they differ in detail, they share the same goal: explaining how an AI system arrives at its outputs. Some do this by examining the data the system was trained on; others probe the decisions the system makes and work out which inputs drove them.

Some of the most common XAI techniques include:

  • LIME and SHAP: These techniques explain why an AI system made a particular decision. They do this by identifying the input features that contributed most to that specific prediction.
  • Decision Trees: These are a type of machine learning model whose decision logic can be drawn and inspected directly, so tree-based systems are largely self-explanatory.
  • Bayesian Networks: These are a type of graphical model that shows the probabilistic relationships between different variables, which makes it easy to trace how evidence leads to a conclusion (see the small sketch after this list).
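
To make the Bayesian-network idea concrete, here is a minimal, self-contained sketch in plain Python (no libraries needed). The weather variables and all probabilities are invented purely for illustration: Clouds and Wind both influence Rain, and we answer a query by summing over the variable we do not observe.

```python
# Toy Bayesian network: Clouds -> Rain <- Wind.
# All probabilities below are invented for illustration only.

p_clouds = {True: 0.3, False: 0.7}          # P(Clouds)
p_wind = {True: 0.4, False: 0.6}            # P(Wind)
p_rain_given = {                            # P(Rain=True | Clouds, Wind)
    (True, True): 0.90, (True, False): 0.60,
    (False, True): 0.30, (False, False): 0.05,
}

def prob_rain(clouds: bool) -> float:
    """P(Rain=True | Clouds=clouds), marginalizing over Wind."""
    return sum(p_wind[w] * p_rain_given[(clouds, w)] for w in (True, False))

print(f"P(rain | dark clouds) = {prob_rain(True):.2f}")   # ~0.72
print(f"P(rain | clear sky)   = {prob_rain(False):.2f}")  # ~0.15
```

Because the network's structure and tables are explicit, you can read off exactly why the model expects rain when the clouds are dark.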

Let’s dive into SHAP in simpler terms:

  • SHAP stands for SHapley Additive exPlanations. It’s like a detective helping us understand how machine learning models make their predictions. Imagine you have a friend who’s really good at guessing whether it’ll rain tomorrow. You ask them, “Why do you think it’ll rain?” They say, “Well, the dark clouds and strong wind tell me so.” You get it—they’re using clues to make their guess.
  • Now, regular AI models are like secret agents. They predict things, but they don’t explain why. It’s like your friend saying, “It’ll rain because… well, I can’t tell you.” Not very helpful, right?
  • SHAP changes that! It’s like giving your friend a special notebook where they write down their weather-guessing rules. They say, “When clouds are dark, wind is strong, and humidity is high, it’ll rain.” Now you understand their reasoning! SHAP helps AI models share their secret rules with us.
  • Some cool things about SHAP:
    1. Waterfall Plot: Imagine a waterfall with rocks. Each rock represents a feature (like cloudiness or wind). The height of each rock shows how much that feature affects the prediction. If a big rock (like dark clouds) is high, it matters a lot. If a small rock (like humidity) is low, it matters less.
    2. Shapley Values: These are like scores for each feature. They tell us how important each clue (feature) is in making the prediction. If dark clouds get a high score, they’re crucial for predicting rain.
  • So, SHAP helps us understand AI better, like having a friendly detective who explains how the magic box (AI model) works.
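
To see Shapley values and a waterfall plot in practice, here is a minimal sketch using the shap library with a scikit-learn model. It assumes a reasonably recent version of shap is installed, and the diabetes dataset is just a stand-in for the weather example.

```python
# Minimal SHAP sketch: Shapley values plus a waterfall plot for one prediction.
# Assumes the `shap` and `scikit-learn` packages are installed.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values for tree models: one additive
# "score" per feature showing how much it pushed this prediction up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer(X.iloc[:50])

print(shap_values.values[0])   # per-feature contributions for sample 0

# Waterfall plot for sample 0: each bar is one feature's contribution,
# stacked from the average prediction up to this sample's prediction.
shap.plots.waterfall(shap_values[0])
```

The waterfall plot is the "rocks" picture from above: big bars are the clues that mattered most for this one prediction.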

How Does LIME Work?

  • Sampling and Surrogate Dataset: First, LIME takes a prediction model and a test sample (like an image). It creates a bunch of similar samples by tweaking the features a little. Think of it as making copies of the same picture with tiny changes.
  • Feature Selection: Next, LIME picks the most important features from these copies. It’s like saying, “Okay, dark clouds matter a lot, wind matters a bit, and humidity matters a little.”
  • Local Explanations: Finally, LIME explains the prediction for that specific sample. It’s like your friend saying, “For this picture, dark clouds are the main clue for rain.” These explanations are locally faithful—they work around the specific sample you’re interested in.

Why Is LIME Cool?

  • It’s like having a detective who explains how the magic box (AI model) works. We can trust it more because we know the rules it follows.
  • Plus, LIME can handle almost any AI model out there—it’s like a universal translator for AI secrets!

So, LIME helps us understand AI better, just like your weather-savvy friend explaining their guesses!
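
Here is a minimal sketch of those three steps using the lime package with a scikit-learn classifier. The Iris dataset simply stands in for the weather example, and the feature names come from the dataset itself.

```python
# Minimal LIME sketch: perturb one sample, fit a local surrogate, and list
# the features that drove this specific prediction.
# Assumes the `lime` and `scikit-learn` packages are installed.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Step 1: the explainer learns feature statistics so it can create the
# "copies with tiny changes" (perturbed samples) around a test point.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# Steps 2-3: pick the most important features and produce a locally
# faithful explanation for this one sample.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=3
)
print(explanation.as_list())   # [(feature rule, weight), ...]
```

Each printed pair is a clue and how strongly it pushed the prediction, valid for this one sample rather than for the whole model.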

What exactly are Decision Trees?

Following the same weather example as above: you ask yourself questions like, “Is it sunny?” or “Is it raining?” These questions help you make a decision.

Well, decision trees work similarly! They’re like flowcharts that help AI models make decisions. Instead of asking about weather in general, they ask specific questions about data features (like temperature, humidity, or wind speed).

How Do Decision Trees Work?

Imagine a tree with branches. Each branch represents a question, and each leaf (end of a branch) gives an answer. Here’s how it works:

  • Root Node: The top of the tree—it’s where everything starts. It represents the whole dataset.
  • Decision/Internal Nodes: These are like the branches. Each node asks a question about a feature (e.g., “Is humidity high?”).
  • Leaf/Terminal Nodes: These are the end points. They give you an answer (like “Play outside” or “Stay indoors”).
  • Splitting: The tree splits at each node based on feature values. For example, if humidity is high, it goes one way; if low, it goes another.
  • Parent and Child Nodes: Nodes split into child nodes. It’s like a family tree!
  • Impurity: This measures how mixed up the answers are. We want pure leaves (all “Play outside” or all “Stay indoors”).

Why Are Decision Trees Cool?

  • They’re easy to understand—like following a recipe or flowchart.
  • They work for both classification (sorting things into groups) and regression (predicting numbers).
  • Plus, they handle different types of features (like numbers or categories).

So, decision trees are like friendly guides helping us make choices based on data.
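
To make the root/branch/leaf picture concrete, here is a tiny scikit-learn sketch on a made-up “play outside” dataset; the weather numbers are invented purely for illustration, and export_text prints the tree as readable if/else rules.

```python
# Minimal decision-tree sketch with scikit-learn.
# The weather data below is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [temperature (°C), humidity (%), wind speed (km/h)]
X = [[30, 40, 5], [22, 85, 10], [25, 60, 30], [18, 90, 20], [28, 50, 8]]
y = ["Play outside", "Stay indoors", "Stay indoors", "Stay indoors", "Play outside"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Print the tree as if/else rules: the root split, the internal questions,
# and the leaf answers.
print(export_text(tree, feature_names=["temperature", "humidity", "wind"]))

# Classify a new day.
print(tree.predict([[26, 55, 12]]))   # e.g. ['Play outside']
```

The printed rules are the whole model: you can follow a sample from the root question down to its leaf and see exactly why it got its answer.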

Some of the top use cases of XAI

  1. Data Protection: The European Union’s General Data Protection Regulation (GDPR) includes a ‘right to explanation’ for automated decisions, which effectively requires XAI so that organizations can justify how a decision was reached.
  2. Medical: XAI is applied in Clinical Decision Support Systems (CDSS) in healthcare, where models predict diagnoses from patients’ medical records and clinicians need to see why a prediction was made before acting on it.
  3. Defense: XAI matters in military applications because lethal autonomous weapon systems (LAWS) can cause less unintended harm if operators can verify how a system distinguishes a civilian from a combatant.
  4. Banking: Regulators look at overall business volumes and the number of suspicious activities reported; any ratio outside the industry norm triggers regulatory investigation. In such cases, XAI helps reduce false positives by explaining why an activity was flagged.
  5. Finance: XAI uses more data and improved algorithms to identify creditworthy borrowers that legacy models would have overlooked, and because the decision-making factors become transparent, it also acts as an ethical filter on those decisions. Beyond finance, XAI has found use cases in other event-critical sectors: in manufacturing it can improve product quality, optimize production processes, and reduce costs; in autonomous vehicles it is crucial for safety and user trust; and in fraud detection it can both flag fraudulent transactions and explain why a transaction was considered fraudulent.

What are the Benefits of Explainable AI?

There are many benefits to using XAI. Some of the most important benefits include:

  • Increased trust: When people understand how AI systems work, they are more likely to trust them.
  • Reduced bias: XAI can help to identify and fix bias in AI systems.
  • Improved fairness: By making AI systems more transparent, we can make sure that they are fair to everyone.
  • Better decision-making: When we understand how AI systems make decisions, we can make better decisions about how to use them.

What are the Challenges of Explainable AI?

There are also some challenges to using XAI. Some of the most important challenges include:

  • Complexity: Some AI systems are very complex, and it can be difficult to explain how they work.
  • Accuracy: XAI techniques are not always perfect, and they can sometimes provide inaccurate explanations.
  • Privacy: Explaining how an AI system works can sometimes reveal sensitive information.

The Future of Explainable AI

XAI is a rapidly developing field, and new techniques are being developed all the time. As XAI continues to develop, it is likely to play an increasingly important role in our lives.