SHAP - Explain Machine Learning Model Predictions using Game Theoretic Approach [Python]


SHAP - SHapley Additive exPlanations

Machine learning models are now used to solve a wide range of problems, so it has become important to understand how they make their predictions. Classic ML metrics like accuracy, mean squared error, and R2 score do not give detailed insight into model behavior. A model can achieve more than 90% accuracy on a classification task yet fail to recognize some classes properly due to imbalanced data, or it can rely on features that make no sense for predicting a particular class. Many Python libraries (eli5, LIME, SHAP, interpret, treeinterpreter, etc.) can be used to debug models and better understand their behavior on any sample of the data. They help us see how much each feature contributes to a prediction. A deep understanding of our ML models helps us judge how reliable they are and whether they are fit to be put into production.

As a part of this tutorial, we'll concentrate on how to use SHAP to analyze the performance of machine learning models. SHAP stands for SHapley Additive exPlanations and uses a game-theoretic approach to explain model predictions. It starts from a base value for the prediction based on prior knowledge and then introduces the features of a sample one by one to measure the impact each introduction has on the final prediction. It accounts for the order in which features are introduced as well as interactions between features, helping us better understand model behavior. During this process it records shap values, which are later used for plotting and explaining predictions. We'll try various machine learning tasks and then interpret the models' predictions with SHAP to understand their performance in depth.
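
To make the game-theoretic idea concrete, here is a small pure-NumPy sketch (independent of the shap library) that computes exact Shapley values for a hypothetical two-feature model by averaging marginal contributions over every feature ordering; "absent" features are replaced by background values, which mirrors the interventional approach used later in this tutorial:

```python
import itertools
import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley values: average each feature's marginal contribution
    over every possible feature ordering (feasible only for a handful of features)."""
    n = len(x)
    phi = np.zeros(n)
    perms = list(itertools.permutations(range(n)))
    for order in perms:
        z = background.copy()        # start with all features "absent"
        prev = predict(z)
        for i in order:              # introduce features one by one
            z[i] = x[i]
            cur = predict(z)
            phi[i] += cur - prev     # marginal contribution of feature i
            prev = cur
    return phi / len(perms)

# Toy model with an interaction term: f(x) = 2*x0 + x0*x1
predict = lambda z: 2 * z[0] + z[0] * z[1]
x = np.array([1.0, 3.0])
background = np.array([0.0, 0.0])    # values taken by "missing" features

phi = shapley_values(predict, x, background)
base_value = predict(background)
# Additivity: base value + sum of shap values equals the model prediction
print(phi)                                 # [3.5 1.5]
print(base_value + phi.sum(), predict(x))  # both 5.0
```

Note how the interaction term x0*x1 gets split between the two features: averaging over orderings is what makes the attribution fair.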

SHAP provides a set of classes, commonly referred to as explainers, that can explain different kinds of machine learning models from many Python libraries. An explainer generally takes the ML model and data as input and returns an explainer object whose shap values are used to plot the various charts explained later on. Below is a list of explainers available in SHAP.

  • AdditiveExplainer - This explainer is used to explain Generalized Additive Models.
  • BruteForceExplainer - This explainer uses a brute force approach to find shap values by trying all possible feature orderings.
  • DeepExplainer - This explainer is designed for deep learning models created using Keras, TensorFlow, and PyTorch. It's an enhanced version of the DeepLIFT algorithm where we measure conditional expectations of SHAP values based on a number of background samples. It's advisable to keep the number of background samples reasonable: more samples give more accurate results but take much longer to compute. Generally, 100 random samples are a good choice.
  • GradientExplainer - This explainer is used for differentiable models which are based on the concept of expected gradients which itself is an extension of the integrated gradients method.
  • KernelExplainer - This explainer uses special weighted linear regression to compute the importance of each feature and the same values are used as SHAP values.
  • LinearExplainer - This explainer is used for linear models available from sklearn. It can account for the relationship between features as well.
  • PartitionExplainer - This explainer calculates shap values recursively through trying a hierarchy of feature combinations. It can capture the relationship between a group of related features.
  • PermutationExplainer - This explainer iterates through all permutations of features in both forward and reverse directions. It can take a long time if run on many samples.
  • SamplingExplainer - This explainer generates shap values on the assumption that features are independent and is an extension of the algorithm proposed in the paper "An Efficient Explanation of Individual Classifications using Game Theory".
  • TreeExplainer - This explainer is used for tree-based models like decision trees, random forests, and gradient boosting.
  • CoefficentExplainer - This explainer returns model coefficients as shap values. It does not perform any actual shap value computation.
  • LimeTabularExplainer - This explainer simply wraps LimeTabularExplainer from the lime library. If you are interested in learning about lime then please feel free to check our tutorial on it in the references section.
  • MapleExplainer - This explainer simply wraps MAPLE into shap interface.
  • RandomExplainer - This explainer simply returns random feature shap values.
  • TreeGainExplainer - This explainer returns global gain/Gini feature importances for tree models as shap values.
  • TreeMapleExplainer - This explainer wraps tree MAPLE in the shap interface.

We'll be primarily concentrating on LinearExplainer as a part of this tutorial which will be used to explain LinearRegression and LogisticRegression model predictions.

Below is a list of available charts with SHAP:

  • summary_plot - It creates a beeswarm plot of shap values distribution of each feature of the dataset.
  • decision_plot - It shows the path of how the model reached a particular decision based on shap values of individual features. The individual plotted line represents one sample of data and how it reached a particular prediction.
  • multioutput_decision_plot - It's a decision plot for multi-output models.
  • dependence_plot - It shows the relationship between a feature's value (X-axis) and its shap values (Y-axis).
  • force_plot - It plots shap values using an additive force layout. It can help us see which features contributed most positively or negatively to a prediction.
  • image_plot - It plots shap values for images.
  • monitoring_plot - It helps in monitoring the behavior of the model over time by tracking its loss.
  • embedding_plot - It projects shap values using PCA for 2D visualization.
  • partial_dependence_plot - It shows basic partial dependence plot for a feature.
  • bar_plot - It shows a bar plot of shap values impact on the prediction of a particular sample.
  • waterfall_plot - It shows a waterfall plot explaining a particular prediction of the model based on shap values. It kind of shows the path of how shap values were added to the base value to come to a particular prediction.
  • text_plot - It plots an explanation of text samples coloring text based on their shap values.

We'll be explaining the majority of charts that are possible with a structured dataset as a part of this tutorial.

We'll start by importing the necessary libraries.

In [1]:
import pandas as pd
import numpy as np

import sklearn

import warnings
warnings.filterwarnings("ignore")

Structured Data : Regression

The first example that we'll use for explaining the usage of SHAP is the regression task on structured data.

Load Dataset

The dataset that we'll use for this task is the Boston housing dataset, which is easily available from scikit-learn. We'll load the dataset and print its description explaining the various features present in it. We have also loaded the dataset as a pandas dataframe. The target we'll predict is the median value of owner-occupied homes in $1000's.

In [2]:
from sklearn.datasets import load_boston

boston = load_boston()

for line in boston.DESCR.split("\n")[5:28]:
    print(line)

boston_df = pd.DataFrame(data=boston.data, columns = boston.feature_names)
boston_df["Price"] = boston.target

boston_df.head()
**Data Set Characteristics:**

    :Number of Instances: 506

    :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target.

    :Attribute Information (in order):
        - CRIM     per capita crime rate by town
        - ZN       proportion of residential land zoned for lots over 25,000 sq.ft.
        - INDUS    proportion of non-retail business acres per town
        - CHAS     Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
        - NOX      nitric oxides concentration (parts per 10 million)
        - RM       average number of rooms per dwelling
        - AGE      proportion of owner-occupied units built prior to 1940
        - DIS      weighted distances to five Boston employment centres
        - RAD      index of accessibility to radial highways
        - TAX      full-value property-tax rate per $10,000
        - PTRATIO  pupil-teacher ratio by town
        - B        1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
        - LSTAT    % lower status of the population
        - MEDV     Median value of owner-occupied homes in $1000's

    :Missing Attribute Values: None
Out[2]:
CRIM ZN INDUS CHAS NOX RM AGE DIS RAD TAX PTRATIO B LSTAT Price
0 0.00632 18.0 2.31 0.0 0.538 6.575 65.2 4.0900 1.0 296.0 15.3 396.90 4.98 24.0
1 0.02731 0.0 7.07 0.0 0.469 6.421 78.9 4.9671 2.0 242.0 17.8 396.90 9.14 21.6
2 0.02729 0.0 7.07 0.0 0.469 7.185 61.1 4.9671 2.0 242.0 17.8 392.83 4.03 34.7
3 0.03237 0.0 2.18 0.0 0.458 6.998 45.8 6.0622 3.0 222.0 18.7 394.63 2.94 33.4
4 0.06905 0.0 2.18 0.0 0.458 7.147 54.2 6.0622 3.0 222.0 18.7 396.90 5.33 36.2

Divide Dataset Into Train/Test Sets, Train Model, and Evaluate Model

We'll first divide the dataset into train (85%) and test (15%) sets using the train_test_split() method available from scikit-learn. We'll then fit a simple linear regression model on the train data. Once training is completed, we'll print the R2 score of the model on the train and test datasets. If you are interested in learning about various machine learning metrics and models then please feel free to check our tutorials on sklearn in the Machine Learning section of the website. Here's a link to a tutorial on ML metrics for easy review.

In [3]:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X, Y = boston.data, boston.target

print("Total Data Size : ", X.shape, Y.shape)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.85, test_size=0.15, random_state=123, shuffle=True)

print("Train/Test Sizes : ",X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)

lin_reg = LinearRegression()
lin_reg.fit(X_train, Y_train)

print()
print("Test  R^2 Score : ", lin_reg.score(X_test, Y_test))
print("Train R^2 Score : ", lin_reg.score(X_train, Y_train))
Total Data Size :  (506, 13) (506,)
Train/Test Sizes :  (430, 13) (76, 13) (430,) (76,)

Test  R^2 Score :  0.6675760904888196
Train R^2 Score :  0.7524778368022297

We can see from the above R2 values that our linear regression model is performing decently (though not great). We'll now look at the various charts provided by SHAP to understand the model better, using a random sample from the test dataset.

SHAP can generate charts using JavaScript as well as matplotlib. We'll generate all charts using the JavaScript backend. In order to do that, we need to call the initjs() method of shap to initialize it.

In [ ]:
import shap

shap.initjs()


Create LinearExplainer Object

At first, we'll need to create an explainer object in order to plot the various charts explaining a particular prediction. We'll start by creating a LinearExplainer, which is commonly used for linear models. It has the below-mentioned arguments:

  • model - It accepts the model which we trained with train data. It can even accept tuple of (coef, intercept) instead.
  • data - It accepts data based on which it'll generate SHAP values. We can provide a numpy array, pandas dataframe, scipy sparse matrix, etc. It can also accept tuple with (mean, cov).
  • feature_perturbation - It accepts one of the below strings.
    • interventional - It lets us compute SHAP values discarding the relationship between features.
    • correlation_dependent - It lets us compute SHAP values considering relationship between features.
  • nsamples - It accepts integer specifying a number of samples to use for calculating transformation matrix used to account for feature correlation when feature_perturbation is set to correlation_dependent.

Below we have created a LinearExplainer by giving the model and train data as input. This creates an explainer that does not take the correlation between features into account (the default interventional feature perturbation).

In [5]:
lin_reg_explainer1 = shap.LinearExplainer(lin_reg, X_train)

Below we have used the explainer to generate shap values for the 0th sample of the test dataset using its shap_values() method. The explainer object has a base value to which it adds the shap values of a particular sample to generate the final prediction. The base value is stored in the expected_value attribute of the explainer object. Every model prediction can be recovered by adding the shap values generated for a sample to this expected value. Below we have printed the base value and then generated a prediction by adding the shap values to it, to compare it with the prediction generated by the linear regression model.

In [6]:
sample_idx = 0

shap_vals = lin_reg_explainer1.shap_values(X_test[sample_idx])

print("Base Value : ", lin_reg_explainer1.expected_value)
print()
print("Shap Values for Sample %d : "%sample_idx, shap_vals)
print("\n")
print("Prediction From Model                            : ", lin_reg.predict(X_test[sample_idx].reshape(1,-1))[0])
print("Prediction From Adding SHAP Values to Base Value : ", lin_reg_explainer1.expected_value + shap_vals.sum())
Base Value :  22.356046511627905

Shap Values for Sample 0 :  [-4.89506079 -0.51516376  0.38384104 -0.09353663 -0.7209193  -2.20185995
 -0.07577931  3.32858846  4.24106936 -3.3262824  -1.62667618 -2.8628054
  1.36411231]


Prediction From Model                            :  15.355573935386687
Prediction From Adding SHAP Values to Base Value :  15.355573935386687
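
For a linear model with independent features (the interventional setting above), the shap value of feature i has a simple closed form: coef_i * (x_i - mean(X_train_i)), which is why the values above add up exactly to the model prediction. A short NumPy sketch with made-up coefficients (not the shap library itself) verifies this additivity:

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))            # hypothetical training data
coef, intercept = np.array([2.0, -1.0, 0.5]), 3.0
predict = lambda A: A @ coef + intercept       # hypothetical fitted linear model

base_value = predict(X_train).mean()           # plays the role of expected_value
x = np.array([1.0, 2.0, -1.0])                 # one sample to explain

shap_vals = coef * (x - X_train.mean(axis=0))  # interventional linear SHAP values
# Base value + sum of shap values reproduces the prediction
print(base_value + shap_vals.sum(), predict(x[None])[0])
```

The two printed numbers agree (up to floating-point rounding), exactly as in the explainer output above.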

Below we have created another LinearExplainer by giving the model and train data as input, this time setting feature_perturbation to correlation_dependent. This creates an explainer that takes the relationship between features into account.

In [7]:
lin_reg_explainer2 = shap.LinearExplainer(lin_reg, X_train, feature_perturbation="correlation_dependent")

In [8]:
sample_idx = 0

shap_vals = lin_reg_explainer2.shap_values(X_test[sample_idx].reshape(1,-1))[0]

print("Base Value : ", lin_reg_explainer2.expected_value)
print()
print("Shap Values for Sample %d : "%sample_idx, shap_vals)
print("\n")
print("Prediction From Model                            : ", lin_reg.predict(X_test[sample_idx].reshape(1,-1))[0])
print("Prediction From Adding SHAP Values to Base Value : ", lin_reg_explainer2.expected_value + shap_vals.sum())
Base Value :  22.356046511627905

Shap Values for Sample 0 :  [-6.9471973  -0.06970727 -0.36448797  0.02276717  1.20069023 -3.51372532
 -0.406848    0.41376433 -1.09573668 -0.75216956 -0.55386646 -3.96287921
  9.02892347]


Prediction From Model                            :  15.355573935386687
Prediction From Adding SHAP Values to Base Value :  15.355573935386648

We'll now explain how to plot various charts explained above one by one using both explainers created above.

Bar Plot

The bar plot shows the shap values of each feature for a particular sample of data. Below is a list of important parameters of the bar_plot() method of shap.

  • shap_values - It accepts an array of shap values for an individual sample of data.
  • feature_names - It accepts a list of feature names.
  • max_display - It accepts integer specifying how many features to display in a bar chart.

We can generate shap values by calling the shap_values() method of explainer object passing it samples for which we want to generate shap values. It'll return a list where each entry is a list of shap values for individual samples passed as data.

Below we are generating a bar chart of shap values from our first explainer.

In [ ]:
shap.bar_plot(lin_reg_explainer1.shap_values(X_test[0]),
              feature_names=boston.feature_names,
              max_display=len(boston.feature_names))


We can see from the above bar chart that for this sample the features CRIM, TAX, B, RM, PTRATIO, NOX, ZN, CHAS, and AGE contribute negatively, while RAD, DIS, LSTAT, and INDUS contribute positively to the final prediction.

Below we have generated another bar plot of shap values for our second explainer, which takes the relationship between features into account.

In [ ]:
shap.bar_plot(lin_reg_explainer2.shap_values(X_test[0].reshape(1,-1))[0],
              feature_names=boston.feature_names,
              max_display=len(boston.feature_names))


Waterfall Plot

The second chart that we'll explain is a waterfall chart which shows how shap values of individual features are added to the base value in order to generate a final prediction. Below is a list of important parameters of the waterfall_plot() method.

  • expected_value - It accepts base value on which shap values will be added. The explainer object has a property named expected_value which needs to be passed to this parameter.
  • shap_values - It accepts an array of shap values for an individual sample of data.
  • feature_names - It accepts a list of feature names.
  • max_display - It accepts an integer specifying how many features to display in the chart.
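
The path drawn by a waterfall plot is just a running total: start at the base value and add the shap values one by one. A minimal NumPy sketch with illustrative numbers (not taken from the model above):

```python
import numpy as np

base_value = 22.36                                  # illustrative base value
shap_vals = np.array([-4.9, 3.3, 4.2, -3.3, 1.4])   # illustrative shap values

# Each step of the waterfall is the cumulative sum starting from the base
path = base_value + np.cumsum(shap_vals)
print(path)        # intermediate totals, one per feature
print(path[-1])    # final prediction = base value + sum of all shap values
```

The waterfall chart simply draws these intermediate totals as bars from one step to the next.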

Below we have generated a waterfall plot for the first explainer object, which does not consider the interaction between features.

In [ ]:
shap.waterfall_plot(lin_reg_explainer1.expected_value,
                    lin_reg_explainer1.shap_values(X_test[0]),
                    feature_names=boston.feature_names,
                    max_display=len(boston.feature_names),
                    )


Below we have generated a waterfall plot for the second explainer object, which does consider the interaction between features. We can notice differences in the shap values generated by the two explainers, since one considers the relationship between features and the other does not.

In [ ]:
shap.waterfall_plot(lin_reg_explainer2.expected_value,
                    lin_reg_explainer2.shap_values(X_test[0].reshape(1,-1))[0],
                    feature_names=boston.feature_names,
                    max_display=len(boston.feature_names))


Decision Plot

The decision plot, like the waterfall chart, shows the decision path followed by applying the shap values of individual features one by one to the expected value to generate the predicted value, drawn as a line chart.

The decision plot can be used to show a decision path followed for more than one sample as well. Below is a list of important parameters of the decision_plot() method.

  • expected_value - It accepts base value on which shap values will be added. The explainer object has a property named expected_value which needs to be passed to this parameter.
  • shap_values - It accepts an array of shap values for an individual sample of data.
  • feature_names - It accepts a list of feature names.
  • feature_order - It accepts one of the below values as input and orders features accordingly.

    • importance - Default value. Orders features according to their importance.
    • hclust - Hierarchical clustering.
    • none
    • A list or array of indices.
  • highlight - It accepts a list of indexes specifying which samples to highlight from the list of samples.

  • link - It accepts string specifying type of transformation used for the x-axis. It accepts one of the below values.

    • identity
    • logit
  • plot_color - It accepts matplotlib colormap to use to the color plot.

  • color_bar - It accepts boolean value specifying whether to display color bar or not.

Below we have drawn the decision plot of a single sample from the test dataset using the first linear explainer.

In [ ]:
shap.decision_plot(lin_reg_explainer1.expected_value,
                   lin_reg_explainer1.shap_values(X_test[0]),
                   feature_names=boston.feature_names.tolist(),
                   )


Below we have created another decision plot of 5 samples from the test dataset using the first linear explainer. We have also highlighted 2nd and 3rd samples from a dataset with different line styles.

In [ ]:
shap.decision_plot(lin_reg_explainer1.expected_value,
                   lin_reg_explainer1.shap_values(X_test[0:5]),
                   feature_names=boston.feature_names.tolist(),
                   highlight=[1, 2],
                   )


Dependence Plot

The dependence plot shows the relation between actual feature value and shap values for a particular feature of the dataset. We can generate a dependence plot using the dependence_plot() method. Below is a list of important parameters of the dependence_plot() method.

  • ind - It accepts either an integer specifying the index of the feature in the data or a string specifying the name of the feature. For feature names given as a string, we need to provide the feature names as a list to the feature_names parameter.
  • shap_values - It accepts an array of shap values for an individual sample of data.
  • features - It accepts dataset which was used to generate shap values given to the shap_values parameter.
  • feature_names - It accepts a list of feature names.
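
For the interventional linear explainer, the dependence plot has a simple structure: a feature's shap value is a linear function of the feature's own value, with slope equal to the model coefficient. A toy NumPy sketch (hypothetical coefficients, not the shap library) illustrates the data behind such a plot:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 3))                    # stand-in dataset
coef = np.array([1.5, -2.0, 0.25])              # hypothetical linear model coefficients
mean = X.mean(axis=0)

# Interventional linear SHAP values for every sample at once
shap_matrix = (X - mean) * coef                 # shape (60, 3)

# Dependence data for feature 0: x-axis = feature value, y-axis = shap value
xs, ys = X[:, 0], shap_matrix[:, 0]
slope = np.polyfit(xs, ys, 1)[0]
print(slope)                                    # recovers the coefficient, 1.5
```

For tree models or correlation-dependent explainers the scatter is no longer a straight line, which is what makes the dependence plot informative.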

Below we have generated a dependence plot for the CRIM feature using our first linear explainer. It also shows the interaction of the feature with the AGE feature, whose values are shown as a color bar.

In [ ]:
shap.dependence_plot("CRIM",
                     lin_reg_explainer1.shap_values(X_test),
                     features=X_test,
                     feature_names=boston.feature_names,
                     )


Below we have generated a dependence plot of feature CRIM using the test dataset and second linear explainer created earlier.

In [ ]:
shap.dependence_plot("CRIM",
                     lin_reg_explainer2.shap_values(X_test),
                     features=X_test,
                     feature_names=boston.feature_names,
                     )


Embedding Plot

The embedding plot projects shap values to 2D projection using PCA for visualization. This can help us see the spread of different shap values for a particular feature.

We can generate an embedding plot using the embedding_plot() method. Below is a list of important parameters of the embedding_plot() method.

  • ind - It accepts either an integer specifying the index of the feature in the data or a string specifying the name of the feature. For feature names given as a string, we need to provide the feature names as a list to the feature_names parameter.
  • shap_values - It accepts an array of shap values for an individual sample of data.
  • feature_names - It accepts a list of feature names.
  • method - It accepts string pca or numpy array as input. If pca is given then use PCA to generate 2D projection. If a numpy array is given then its size should be (no_of_sample x 2) and will be considered embedding values.
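
The 2D projection used here is ordinary PCA applied to the matrix of shap values. A minimal NumPy sketch of that step (using a random stand-in for the shap value matrix, not shap's own internals):

```python
import numpy as np

rng = np.random.default_rng(42)
shap_matrix = rng.normal(size=(76, 13))    # stand-in for shap values of 76 samples

# PCA via SVD: center the matrix, then project onto the top 2 principal directions
centered = shap_matrix - shap_matrix.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ Vt[:2].T            # 2D coordinates, one point per sample
print(embedding.shape)                     # (76, 2)
```

Each row of `embedding` becomes one point in the embedding plot, colored by the shap value of the chosen feature.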

Below we have generated an embedding plot for the CRIM feature on test data using our first linear explainer.

In [ ]:
shap.embedding_plot("CRIM",
                    lin_reg_explainer1.shap_values(X_test),
                    feature_names=boston.feature_names)


Below we have generated an embedding plot for the CRIM feature on test data using our second linear explainer.

In [ ]:
shap.embedding_plot("CRIM",
                    lin_reg_explainer2.shap_values(X_test),
                    feature_names=boston.feature_names)


Force Plot

The force plot shows the shap value contributions that generate the final prediction using an additive force layout. It shows how much each feature contributed, positively or negatively, on top of the base value to generate the prediction.

We can generate a force plot using the force_plot() method. Below is a list of important parameters of the force_plot() method.

  • expected_value - It accepts base value on which shap values will be added. The explainer object has a property named expected_value which needs to be passed to this parameter.
  • shap_values - It accepts an array of shap values for an individual sample of data.
  • feature_names - It accepts a list of feature names.
  • out_names - It accepts string specifying target variable name.

Below we have generated a force plot of the first test sample using the first linear explainer. We can see the magnitude of positivity and negativity of features in the chart.

In [ ]:
shap.force_plot(lin_reg_explainer1.expected_value,
                lin_reg_explainer1.shap_values(X_test[0]),
                feature_names=boston.feature_names,
                out_names="Price($)")


Below we have generated a force plot of the first test sample using the second linear explainer. We can see differences from the previous plot: there RAD contributed most positively to the prediction, whereas here LSTAT contributes most positively and CRIM most negatively. The second linear explainer considers the relationship between features, hence the results differ.

In [ ]:
shap.force_plot(lin_reg_explainer2.expected_value,
                lin_reg_explainer2.shap_values(X_test[0].reshape(1,-1))[0],
                feature_names=boston.feature_names,
                out_names="Price($)")


Below we have generated a force plot of 10 samples of the dataset using the first linear explainer. It also provides dropdowns which we can change to see the impact of an individual feature on all 10 predictions. In this chart, the y-axis represents the predicted value for each sample and the x-axis represents the 10 samples (0-9).

In [ ]:
shap.force_plot(lin_reg_explainer1.expected_value,
                lin_reg_explainer1.shap_values(X_test[0:10]),
                feature_names=boston.feature_names,
                out_names="Price($)", figsize=(25,3),
                link="identity")


Summary Plot

The summary plot is a beeswarm plot showing the distribution of shap values for each feature of the data. It can also show the relationship between the shap values and the original values of all features.

We can generate a summary plot using the summary_plot() method. Below is a list of important parameters of the summary_plot() method.

  • shap_values - It accepts array of shap values for individual sample of data.
  • features - It accepts dataset which was used to generate shap values given to shap_values parameter.
  • feature_names - It accepts list of feature names.
  • max_display - It accepts an integer specifying how many features to display.
  • plot_type - It accepts one of the below strings as input.
    • dot - Default for single output.
    • bar - Default for multi output.
    • violin

Below we have generated a summary plot of shap values generated from the test dataset using the first linear explainer. We can see a distribution of shap values and their relation with actual feature values based on the color bar on the right side.

In [ ]:
shap.summary_plot(lin_reg_explainer1.shap_values(X_test),
                  features = X_test,
                  feature_names=boston.feature_names)


Below we have generated a summary plot with plot type bar based on shap values generated from test data using the first linear explainer. The bar chart shows the average impact of each feature on the final prediction. This also highlights feature importance based on shap values.

In [ ]:
shap.summary_plot(lin_reg_explainer1.shap_values(X_test),
                  feature_names=boston.feature_names,
                  plot_type="bar",
                  color="dodgerblue"
                  )

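
The heights of these bars are simply the mean absolute shap value of each feature across all samples, a common global importance measure. A quick NumPy sketch with toy shap values (not the real ones from the model above):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy shap value matrix: 76 samples x 4 features, with feature 0 most influential
shap_matrix = rng.normal(size=(76, 4)) * np.array([3.0, 0.5, 1.5, 0.1])

importance = np.abs(shap_matrix).mean(axis=0)   # height of each bar
order = np.argsort(importance)[::-1]            # features sorted by impact
print(order)                                    # feature 0 first, feature 3 last
```

Sorting features by this quantity reproduces the ordering used by the bar summary plot.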

Below we have generated a summary plot with plot type violin based on shap values generated from test data using the first linear explainer.

In [ ]:
shap.summary_plot(lin_reg_explainer1.shap_values(X_test),
                  feature_names=boston.feature_names,
                  plot_type="violin",
                  color="tomato")


Partial Dependence Plot

SHAP also provides a method named partial_dependence_plot() which can be used to generate a partial dependence plot. Below is a list of important parameters of the partial_dependence_plot() method.

  • ind - It accepts either an integer specifying the index of the feature in the data or a string specifying the name of the feature. For feature names given as a string, we need to provide the feature names as a list to the feature_names parameter.
  • model - It expects a method that predicts the output of the model.
  • features - It’s data that will be used for generating the plot.
  • feature_names - It accepts a list of feature names.
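
A partial dependence curve can be computed by hand: sweep one feature over a grid while holding the others at their observed values, and average the model's predictions at each grid point. A pure-NumPy sketch with a made-up linear predictor (the names and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 3))                  # stand-in dataset
coef, intercept = np.array([1.0, 2.0, -0.5]), 4.0
predict = lambda A: A @ coef + intercept      # hypothetical model

grid = np.linspace(X[:, 1].min(), X[:, 1].max(), 20)
pd_curve = []
for v in grid:
    Xv = X.copy()
    Xv[:, 1] = v                              # force feature 1 to the grid value
    pd_curve.append(predict(Xv).mean())       # average prediction over all samples
pd_curve = np.array(pd_curve)

# For a linear model the partial dependence of a feature is a straight line
# whose slope equals that feature's coefficient:
print((pd_curve[-1] - pd_curve[0]) / (grid[-1] - grid[0]))  # 2.0
```

The ice=True option in shap's version additionally draws one such curve per sample instead of only the average.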

Below we have generated a partial dependence plot of the LSTAT feature based on test data.

In [ ]:
shap.partial_dependence_plot("LSTAT",
                             lin_reg.predict,
                             features=X_test,
                             feature_names=boston.feature_names,
                             model_expected_value=True,
                             feature_expected_value=True,
                             ice=True
                             )


Structured Data : Classification

The second example that we'll use for explaining linear explainer is a classification task on structured data.

Load Dataset

The dataset that we'll use for this task is the wine classification dataset which is easily available from scikit-learn. We'll be loading the dataset and printing its description explaining various features present in the dataset. We have also loaded the dataset as a pandas dataframe. The target value that we'll predict is a class of wine. The dataset has information about three different types of wines.

In [26]:
from sklearn.datasets import load_wine

wine = load_wine()

for line in wine.DESCR.split("\n")[5:28]:
    print(line)

wine_df = pd.DataFrame(data=wine.data, columns=wine.feature_names)
wine_df["WineType"] = wine.target

wine_df.head()
**Data Set Characteristics:**

    :Number of Instances: 178 (50 in each of three classes)
    :Number of Attributes: 13 numeric, predictive attributes and the class
    :Attribute Information:
        - Alcohol
        - Malic acid
        - Ash
        - Alcalinity of ash
        - Magnesium
        - Total phenols
        - Flavanoids
        - Nonflavanoid phenols
        - Proanthocyanins
        - Color intensity
        - Hue
        - OD280/OD315 of diluted wines
        - Proline

    - class:
            - class_0
            - class_1
            - class_2
Out[26]:
alcohol malic_acid ash alcalinity_of_ash magnesium total_phenols flavanoids nonflavanoid_phenols proanthocyanins color_intensity hue od280/od315_of_diluted_wines proline WineType
0 14.23 1.71 2.43 15.6 127.0 2.80 3.06 0.28 2.29 5.64 1.04 3.92 1065.0 0
1 13.20 1.78 2.14 11.2 100.0 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050.0 0
2 13.16 2.36 2.67 18.6 101.0 2.80 3.24 0.30 2.81 5.68 1.03 3.17 1185.0 0
3 14.37 1.95 2.50 16.8 113.0 3.85 3.49 0.24 2.18 7.80 0.86 3.45 1480.0 0
4 13.24 2.59 2.87 21.0 118.0 2.80 2.69 0.39 1.82 4.32 1.04 2.93 735.0 0

Divide Dataset Into Train/Test Sets, Train Model, and Evaluate Model

Below we have divided the wine dataset into train & test sets, trained a logistic regression model on the train data, and then evaluated it on the test data by printing its accuracy.

In [27]:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, Y = wine.data, wine.target

print("Total Data Size : ", X.shape, Y.shape)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, train_size=0.85, test_size=0.15, stratify=Y, random_state=123, shuffle=True)

print("Train/Test Sizes : ",X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)

log_reg = LogisticRegression()
log_reg.fit(X_train, Y_train)

print()
print("Test  Accuracy : ", log_reg.score(X_test, Y_test))
print("Train Accuracy : ", log_reg.score(X_train, Y_train))
Total Data Size :  (178, 13) (178,)
Train/Test Sizes :  (151, 13) (27, 13) (151,) (27,)

Test  Accuracy :  1.0
Train Accuracy :  0.9735099337748344

Create LinearExplainer Object

Below we have created a LinearExplainer object by passing the logistic regression model and train data as input. Please make a note that we are not taking the correlation between features into account this time because we have not set the feature_perturbation parameter. Its default value is interventional.

In [28]:
log_reg_explainer = shap.LinearExplainer(log_reg, data=X_train)
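
For a linear model with interventional feature perturbation, the shap values have a simple closed form: each feature's shap value is its coefficient times its deviation from the training mean, and the expected (base) value is the model's raw score at the training mean. The standalone numpy sketch below (re-fitting its own toy logistic regression rather than reusing the variables above) verifies that base value plus shap values reproduces the model's decision scores:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

wine = load_wine()
X_tr, X_te, y_tr, y_te = train_test_split(wine.data, wine.target, random_state=123)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

mu = X_tr.mean(axis=0)   # training mean, the "background" for interventional shap
x = X_te[0]

# per-class shap values of a linear model: coef_j * (x_j - E[x_j]), shape (3, 13)
phi = model.coef_ * (x - mu)
# per-class expected (base) value: raw score at the training mean
base = model.coef_ @ mu + model.intercept_

# base value + sum of shap values recovers the raw decision scores
scores = base + phi.sum(axis=1)
print(np.allclose(scores, model.decision_function(x.reshape(1, -1))[0]))  # True
```

This is the same additivity property we check with the explainer in the next cell, just derived by hand for the linear case.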

Below we are generating shap values for the 0th sample of test data. As this is a multi-class classification task, the base value consists of three different values, one per class. The shap values generated by the explainer will also be a list of three arrays holding shap values for each class. We again add the shap values for each class to the expected (base) value of that class, which generates three different values, unlike the regression task which generates only one. We then take the index of the highest value as the class prediction.

In [29]:
sample_idx = 0

shap_vals = log_reg_explainer.shap_values(X_test[sample_idx])

val1 = log_reg_explainer.expected_value[0] + shap_vals[0].sum()
val2 = log_reg_explainer.expected_value[1] + shap_vals[1].sum()
val3 = log_reg_explainer.expected_value[2] + shap_vals[2].sum()

print("Base Value : ", log_reg_explainer.expected_value)
print()
print("Shap Values for Sample %d : "%sample_idx, shap_vals)
print("\n")
print("Prediction From Model                            : ", \
                      wine.target_names[log_reg.predict(X_test[sample_idx].reshape(1, -1))[0]])
print("Prediction From Adding SHAP Values to Base Value : ", wine.target_names[np.argmax([val1, val2, val3])])
Base Value :  [-1.63280543 -2.43837392 -2.80941737]

Shap Values for Sample 0 :  [array([ 5.01766680e-01, -6.35728646e-01,  1.95585587e-01, -1.83294961e+00,
        3.65559931e-01,  2.31267253e-02,  3.68708398e-01,  6.45461659e-03,
        1.87405116e-01,  2.26119493e-01,  4.25745701e-03,  4.73488107e-01,
       -5.42176905e+00]), array([-0.77396086,  0.97372722, -0.11301672,  0.78788259, -0.09998906,
        0.04190675,  0.12745907,  0.01314302, -0.29610505,  3.88451769,
       -0.01524022,  0.16708218,  4.69835468]), array([ 0.3775482 , -0.60394957,  0.02194958,  0.35738211, -0.4570523 ,
       -0.15318716, -0.53509376, -0.00723964,  0.36540575, -2.21937331,
        0.0133202 , -0.64983357, -0.10365764])]


Prediction From Model                            :  class_1
Prediction From Adding SHAP Values to Base Value :  class_1

We'll now explain how to plot various charts for the classification tasks.

Bar Plot

Below we have plotted 3 bar plots of shap values for the 0th test sample. As we explained earlier, it's a multi-class classification problem, hence the shap_values() method returns shap values for each class. We have plotted shap values for all three classes to show how each feature's shap values contribute differently to each class.

In [ ]:
shap.bar_plot(log_reg_explainer.shap_values(X_test[0])[0], feature_names=wine.feature_names, max_display=len(wine.feature_names))
shap.bar_plot(log_reg_explainer.shap_values(X_test[0])[1], feature_names=wine.feature_names, max_display=len(wine.feature_names))
shap.bar_plot(log_reg_explainer.shap_values(X_test[0])[2], feature_names=wine.feature_names, max_display=len(wine.feature_names))


Waterfall Plot

Below we have generated 3 waterfall charts for the 0th sample of test data. We can see that the second chart has the highest value after adding the shap values to the expected base value, hence the prediction is class_1.

In [ ]:
shap.waterfall_plot(log_reg_explainer.expected_value[0],
                    log_reg_explainer.shap_values(X_test[0])[0],
                    feature_names=wine.feature_names,
                    max_display=len(wine.feature_names))

shap.waterfall_plot(log_reg_explainer.expected_value[1],
                    log_reg_explainer.shap_values(X_test[0])[1],
                    feature_names=wine.feature_names,
                    max_display=len(wine.feature_names))

shap.waterfall_plot(log_reg_explainer.expected_value[2],
                    log_reg_explainer.shap_values(X_test[0])[2],
                    feature_names=wine.feature_names,
                    max_display=len(wine.feature_names))


Decision Plot

Below we have generated a decision plot for the 0th sample of test data. We have also highlighted the actual prediction. Please make a note that we have used the multioutput_decision_plot() method for this case instead of decision_plot().

In [ ]:
shap.multioutput_decision_plot(log_reg_explainer.expected_value.tolist(),
                               log_reg_explainer.shap_values(X_test),
                               row_index=0,
                               feature_names=wine.feature_names,
                               highlight = [1]
                               )


Dependence Plot

Below we have generated dependence plots for the proline feature. We have generated 3 plots using the shap values of each of the three classes.

In [ ]:
shap.dependence_plot("proline",
                     log_reg_explainer.shap_values(X_test)[0],
                     features=X_test,
                     feature_names=wine.feature_names,
                     )

shap.dependence_plot("proline",
                     log_reg_explainer.shap_values(X_test)[1],
                     features=X_test,
                     feature_names=wine.feature_names,
                     )

shap.dependence_plot("proline",
                     log_reg_explainer.shap_values(X_test)[2],
                     features=X_test,
                     feature_names=wine.feature_names,
                     )
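
Because the explainer here is linear, each dependence plot above is an exact straight line: the shap value of proline for a given class is just that class's coefficient times the feature's deviation from its mean. A small standalone sketch (fitting its own toy model rather than reusing the variables above) confirms the slope of the class-0 plot equals the class-0 coefficient:

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression

wine = load_wine()
model = LogisticRegression(max_iter=5000).fit(wine.data, wine.target)

j = list(wine.feature_names).index("proline")
x_j = wine.data[:, j]

# interventional shap values of a linear model: coef_j * (x_j - mean_j)
phi_j = model.coef_[0, j] * (x_j - x_j.mean())

# the class-0 dependence plot of proline is therefore an exact line
slope = np.polyfit(x_j, phi_j, 1)[0]
print(np.isclose(slope, model.coef_[0, j]))  # True
```

With a nonlinear model (or correlation-aware perturbation) the scatter would no longer collapse onto a line, which is when the dependence plot becomes most informative.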


Embedding Plot

Below we have generated 3 different embedding plots for the proline feature based on test data.

In [ ]:
shap.embedding_plot("proline", log_reg_explainer.shap_values(X_test)[0], feature_names=wine.feature_names)
shap.embedding_plot("proline", log_reg_explainer.shap_values(X_test)[1], feature_names=wine.feature_names)
shap.embedding_plot("proline", log_reg_explainer.shap_values(X_test)[2], feature_names=wine.feature_names)


Force Plot

Below we have generated 3 different force plots based on the shap values and base values of each class for sample 0 of the test dataset. We can see how much each feature contributed to the final prediction.

In [ ]:
shap.force_plot(log_reg_explainer.expected_value[0],
                log_reg_explainer.shap_values(X_test[0])[0],
                feature_names=wine.feature_names,
                out_names="Wine Type")


In [ ]:
shap.force_plot(log_reg_explainer.expected_value[1],
                log_reg_explainer.shap_values(X_test[0])[1],
                feature_names=wine.feature_names,
                out_names="Wine Type")


In [ ]:
shap.force_plot(log_reg_explainer.expected_value[2],
                log_reg_explainer.shap_values(X_test[0])[2],
                feature_names=wine.feature_names,
                out_names="Wine Type")


Below we have generated a force plot for 10 samples of the test dataset, using the shap values and expected value of the first class only.

In [ ]:
shap.force_plot(log_reg_explainer.expected_value[0],
                log_reg_explainer.shap_values(X_test[:10])[0],
                feature_names=wine.feature_names,
                out_names="Wine Type", figsize=(25,3),
                link="identity")


Summary Plot

The summary plot can handle multi-class shap values. Below we have generated a summary plot of test data; it defaults to a bar chart for multi-class problems. We can see how much each feature contributes on average to each class.

In [ ]:
shap.summary_plot(log_reg_explainer.shap_values(X_test),
                  feature_names=wine.feature_names)
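
For multi-class input, the bar heights in this default summary chart are the mean absolute shap value of each feature, computed per class. A minimal numpy sketch of that aggregation (using random stand-in arrays shaped like the explainer's output, not real shap values):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for multi-class shap output: one (n_samples, n_features) array per class
shap_vals = [rng.normal(size=(27, 13)) for _ in range(3)]

# bar height per feature and class: mean of |shap value| over all samples
mean_abs = np.stack([np.abs(sv).mean(axis=0) for sv in shap_vals])
print(mean_abs.shape)  # (3, 13): one row of bar heights per class
```

Taking the absolute value before averaging matters: positive and negative contributions would otherwise cancel and hide an influential feature.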


Below we have generated a summary plot from the shap values generated for class 1 from test data.

In [ ]:
shap.summary_plot(log_reg_explainer.shap_values(X_test)[1],
                  features=X_test,
                  feature_names=wine.feature_names)


Partial Dependence Plot

Below we have generated a partial dependence plot of the proline feature based on test data.

In [ ]:
shap.partial_dependence_plot("proline",
                             log_reg.predict,
                             features=X_test,
                             feature_names=wine.feature_names,
                             model_expected_value=True,
                             feature_expected_value=True,
                             ice=True,
                             )
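
The partial dependence curve itself is straightforward to compute by hand: clamp one feature to each value on a grid and average the model's predictions over the dataset. A standalone sketch of that loop (refitting its own toy model; it uses model.predict to mirror the call above):

```python
import numpy as np
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression

wine = load_wine()
X = wine.data
model = LogisticRegression(max_iter=5000).fit(X, wine.target)

j = list(wine.feature_names).index("proline")
grid = np.linspace(X[:, j].min(), X[:, j].max(), 20)

pd_curve = []
for v in grid:
    Xv = X.copy()
    Xv[:, j] = v                               # clamp proline to the grid value
    pd_curve.append(model.predict(Xv).mean())  # average prediction over samples

print(len(pd_curve))  # 20
```

Setting ice=True in the cell above additionally draws one such curve per individual sample instead of only the average.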




Sunny Solanki