Interpretability Analysis of Deep SHAP Deep Learning Model

1 Introduction

The Deep SHAP algorithm is a technique for explaining deep learning models by attributing the model's output to the contribution of each input feature. Below we introduce the principle of Deep SHAP in detail.

1.1 Origin

Deep SHAP (SHapley Additive exPlanations) is a model interpretation technique proposed by Scott Lundberg and Su-In Lee in 2017. It is based on the Shapley value, a concept from cooperative game theory used to measure the influence of each feature on the output of a complex model.

1.2 Basic idea

The core idea of Deep SHAP is to assign each feature a measure of its local contribution to the model's prediction. The size of this contribution is given by the feature's Shapley value, which averages the feature's marginal contribution over all possible combinations (coalitions) of the other features.
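To make the averaging over feature combinations concrete, the following is a minimal sketch (not the Deep SHAP algorithm itself, which uses DeepLIFT-style backpropagation to approximate these values efficiently) that computes exact Shapley values by brute force for a small model. The function names and the toy linear model are illustrative assumptions, not part of the original article; absent features are replaced by a baseline value.

```python
import math
from itertools import combinations

def shapley_values(model, x, baseline):
    """Exact Shapley values for model(x), enumerating all coalitions.
    Features not in a coalition are replaced by the baseline value."""
    n = len(x)
    players = list(range(n))
    phi = [0.0] * n
    for i in players:
        others = [j for j in players if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                # Model output with and without feature i added to coalition S
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in players]
                without_i = [x[j] if j in S else baseline[j] for j in players]
                phi[i] += w * (model(with_i) - model(without_i))
    return phi

# Toy linear model f(x) = 2*x0 + 3*x1: each feature's Shapley value
# recovers its term's contribution relative to the all-zero baseline.
f = lambda v: 2 * v[0] + 3 * v[1]
print(shapley_values(f, [1.0, 1.0], [0.0, 0.0]))  # → [2.0, 3.0]
```

Note the cost grows exponentially in the number of features, which is exactly why Deep SHAP approximates these values instead of enumerating coalitions.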

Origin: blog.csdn.net/u013537270/article/details/131209937