A Hands-On Guide to Explainability

As more and more businesses adopt machine learning to support their decision-making processes, the need to understand how ML models arrive at their predictions becomes critical. Building and scaling models in production is no longer enough to improve results; the models also need to be transparent, so that we can understand why they make specific predictions.

Data scientists, machine learning engineers, and domain experts often lack the ability to dive deep into their models and examine the “why?”. They are limited to basic insights from standard summary metrics like performance charts and descriptive statistics. Yet being able to explain model predictions is fundamental to further testing, experimentation, improved performance, and smarter decision-making. In a time of abundant compute power and increasingly sophisticated algorithms, model accuracy is no longer the data scientist’s main bottleneck. The new challenge is understanding and explaining why the model performs the way it does and which features are important. This is where model explainability comes in.

In this article, you’ll learn:
  • What is explainability?
  • Why is explainability important?
  • How to achieve model explainability using Aporia

What is Explainability?

Explainability in machine learning is the ability to understand, in a human-comprehensible way, how a model produces its output based on the data features, the algorithm used, and the environment the model operates in. Basically, it’s a broad concept covering the analysis and understanding of the results ML models provide. It’s the answer to the “black-box” problem: the difficulty of understanding how models arrive at specific decisions. Another phrase used for this concept is Explainable AI (XAI), which describes the set of methods and tools that allow humans to comprehend and trust a model’s results and output.

It’s important to point out that explainability isn’t just for machine learning engineers or data scientists – it’s for everyone. Any explanation of a model should be understandable regardless of whether the audience is a data scientist, business owner, customer, or user. Therefore, it should be both simple and information-rich.

Now to the next question: why is explainability important in machine learning?

Why You Need Explainability for Your ML Models

  1. Trust: People generally trust things that they are familiar with or have pre-existing knowledge of. Therefore, if they don’t understand the inner workings of a model, they can’t trust it, especially in high-risk domains like healthcare or finance. It is impossible to trust a machine learning model without understanding how and why it makes its decisions and whether these decisions are justified.
  2. Regulations and Compliance: Regulations that protect consumers of technology require a strong level of explainability before the general public can use it. For instance, the EU’s General Data Protection Regulation (Regulation 2016/679) gives consumers the “right to an explanation of the decision reached after such assessment and to challenge the decision” when it is affected by AI algorithms. Data scientists, auditors, and business decision-makers alike must also ensure their AI complies with company policies, industry standards, and government regulations.
  3. ML Fairness and Bias: When it comes to correcting unfairness and bias in a model, there is no way to detect where it comes from in the data without model explainability. And with the prevalence of bias and vulnerabilities in ML models, understanding how a model works should be a first priority before it is deployed to production.
  4. Debugging: Model explainability is essential when debugging a model during development; without understanding which feature or part of the algorithm is causing the faulty behavior, it is nearly impossible to achieve the desired output.
  5. Enhanced Control: When you understand how your models work, you can spot unknown vulnerabilities and flaws, which makes it easy to rapidly identify and correct mistakes in low-risk situations.
  6. Ease of Understanding and the Ability to Question: Understanding how the model’s features affect the model output helps you further question and improve the model.

Having considered why explainability is important, it’s essential to understand its scope.

Explainability Approaches

There are three different approaches to model explainability:
  1. Global
  2. Local
  3. Cohort

Global Explainability Approach:

The global approach explains the model’s behavior holistically. Global explainability tells you which features contribute most to the model’s overall predictions. During model training, it shows stakeholders which features the model relies on when making decisions. For example, a product team looking at a recommendation model might want to know which features motivate or engage customers most.
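
To make this concrete, here is a minimal sketch of global explainability using the open-source SHAP library. The dataset and model below (scikit-learn’s diabetes dataset and a random forest) are stand-ins for your own, not Aporia’s API; ranking features by their mean absolute SHAP value gives a global importance ordering.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in model: any tree ensemble works with shap.TreeExplainer.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP assigns each feature a signed contribution for every prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Global view: rank features by the average magnitude of their contributions.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```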

Local Explainability Approach:

Local interpretation helps you understand the model’s behavior in the local neighborhood, i.e. for a single prediction: it explains how each feature value of that data point individually contributes to the model’s prediction.

Local explainability helps in finding the root cause of a particular issue in production. It can also be used to discover which features are most impactful in a specific model decision. This is important, especially in industries like finance and healthcare, where individual features are almost as important as all features combined. For example, imagine your credit risk model rejected an applicant for a loan. With local explainability, you can see why this decision was made and how to better advise the applicant. It also helps in assessing the suitability of the model for deployment.
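
Continuing the stand-in example from the global sketch above, a local explanation decomposes one prediction into a base value (the model’s average prediction) plus one signed contribution per feature:

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Same stand-in setup as the global sketch above.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

# Local view: explain ONE prediction (row 0), feature by feature.
i = 0
contribs = explainer.shap_values(X[i : i + 1])[0]  # one SHAP value per feature
print(f"Base value (average prediction): {float(explainer.expected_value):.1f}")
print(f"Prediction for row {i}: {model.predict(X[i : i + 1])[0]:.1f}")

# The five features pushing this particular prediction hardest, up or down:
for name, c in sorted(zip(data.feature_names, contribs), key=lambda t: -abs(t[1]))[:5]:
    print(f"  {name}: {c:+.1f}")
```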

Cohort Explainability Approach:

Somewhere between global and local explainability lies cohort (or segment) explainability, which explains how segments or slices of the data contribute to the model’s predictions. During model validation, cohort explainability helps explain the differences in how a model predicts between a cohort where it performs well and a cohort where it performs poorly. It also assists when trying to explain outliers, as outliers occur within a local neighborhood or data slice.

Note: Both Local and Cohort (Segment) explainability can be used to explain outliers.
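
Here is a minimal cohort comparison under the same stand-in assumptions: split the data into two slices and compare which features dominate each slice’s explanations. The BMI-based split below is purely illustrative.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Same stand-in model; here we compare explanations across two data slices.
data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
shap_values = shap.TreeExplainer(model).shap_values(X)

# Hypothetical cohorts: rows above vs. below the median BMI (feature index 2).
mask = X[:, 2] > np.median(X[:, 2])
for label, cohort in [("high-BMI", shap_values[mask]), ("low-BMI", shap_values[~mask])]:
    mean_abs = np.abs(cohort).mean(axis=0)
    top3 = sorted(zip(data.feature_names, mean_abs), key=lambda t: -t[1])[:3]
    print(f"{label} cohort, top features: {[name for name, _ in top3]}")
```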

There are various explainability methods, such as SHAP, LIME, ELI5, and Partial Dependence Plots (PDPs).
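
As one example of these methods, here is a short partial-dependence sketch using scikit-learn’s built-in PartialDependenceDisplay (again with a stand-in model and dataset). A partial dependence plot shows the average effect of a chosen feature on the prediction, marginalizing out all other features.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

# Stand-in model on a public dataset.
data = load_diabetes()
model = GradientBoostingRegressor(random_state=0).fit(data.data, data.target)

# Average predicted value as a function of each selected feature,
# with all other features marginalized out.
PartialDependenceDisplay.from_estimator(
    model,
    data.data,
    features=[2, 8],  # indices of "bmi" and "s5" in this dataset
    feature_names=list(data.feature_names),
)
plt.show()
```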

One question that comes to mind when dealing with explainability is: Which parts of the model are being explained and why does that part matter? Let’s look into this question…

Which parts of the model are being explained and why does a particular part matter?
  1. Features: The model’s features are typically the primary source of explanation, since they are the main inputs from which the model builds its predictions.
  2. Data characteristics: These include data format, data integrity, and more. The data feeding production models is constantly changing, so it is important to log and monitor those changes to better understand and explain the model’s output. Data distribution shifts can impact model predictions, so tracking the data distribution and having a good understanding of the data characteristics is important for model explainability (a minimal drift check is sketched after this list).
  3. Algorithms: The choice of algorithms and techniques used when training a model is as important as the data itself. These algorithms define how the features interact and combine to achieve the model output. A clear understanding of the training algorithms and techniques is essential for achieving model explainability.
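
As referenced in item 2 above, here is a minimal sketch of a distribution-shift check, assuming you log feature values at training time and in production (the normal samples below are synthetic stand-ins). A two-sample Kolmogorov-Smirnov test flags features whose distributions have drifted.

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for one logged feature at training time vs. in production.
rng = np.random.default_rng(0)
training_values = rng.normal(loc=0.0, scale=1.0, size=5000)
production_values = rng.normal(loc=0.3, scale=1.0, size=5000)

# Two-sample KS test: a small p-value suggests the distributions differ,
# i.e. the feature has drifted and earlier explanations may no longer hold.
stat, p_value = ks_2samp(training_values, production_values)
print(f"KS statistic: {stat:.3f}, p-value: {p_value:.3g}")
if p_value < 0.01:
    print("Likely distribution shift: investigate before trusting the model's explanations.")
```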

In order to achieve explainability, you need tools that can explain your model both globally and locally.

How to Achieve Explainability with Aporia

Aporia’s full-stack ML observability solution gives data scientists and ML engineers the visibility, monitoring and automation, investigation tools, and explainability to understand why models predict what they do, how they perform in production over time, and where they can be improved.

Using Aporia’s Explainable AI Tool

To see how the explainability feature works in Aporia, log in to Aporia with your email – or, if you haven’t created an account yet, you can easily sign up here for Aporia’s free community plan.

Go to the demo model, and from there to the Data Points dashboard. Next, click the Explain button.

For this model, you can see how each feature contributed to the model’s prediction.

You can also generate a business-friendly explanation to share with key stakeholders.

Finally, you can change any feature value and click “Re-Explain” to see how the change affects the prediction. This lets you debug your model for specific predictions.
Aporia’s explainability feature lets you look under the hood of your model and better understand:

  • How features contribute to the model’s predictions across the entire dataset, i.e. global explainability
  • How each feature individually contributes to a specific prediction, i.e. local explainability
  • How features contribute within a particular slice of the data, i.e. cohort (segment) explainability

As machine learning models continue to be adopted across industries and fast become a critical component of organizational decision-making, the idea that ML models must remain “black boxes” is being debunked: model predictions can be explained with explainable AI tools like Aporia.

Aporia makes ML models explainable, helping data science and ML teams to better understand their models and leverage their machine learning in a more effective and responsible way.

Happy explaining!
