Common methods of interpretability analysis (XAI)

1、SHAP (SHapley Additive exPlanations)

SHAP is a game-theoretic approach for explaining the output of any machine learning model. It uses the classic Shapley values from cooperative game theory, together with their related extensions, to connect optimal credit allocation with local explanations.
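
As a minimal sketch, a typical SHAP workflow looks roughly like the following; the XGBoost model and breast-cancer dataset are placeholders chosen only for illustration.

```python
# Minimal SHAP sketch: explain a gradient-boosted tree classifier on a toy dataset.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.Explainer(model, X)   # selects a suitable algorithm (tree-based here)
shap_values = explainer(X)             # Shapley values for every feature of every row

shap.plots.beeswarm(shap_values)       # global summary of feature contributions
shap.plots.waterfall(shap_values[0])   # local explanation of a single prediction
```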

2、LIME(Local Interpretable Model-agnostic Explanations)

LIME is a model-agnostic method that works by locally approximating the behavior of a model around a specific prediction. It attempts to explain what a machine learning model is doing, and it supports explaining individual predictions from text classifiers, classifiers on tabular data, and image classifiers.
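
A minimal sketch of LIME on tabular data; the random-forest model and Iris dataset are placeholders.

```python
# Minimal LIME sketch: explain one prediction of a tabular classifier.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
# Fit a simple local surrogate model around one instance and report feature weights.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())
```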

3、Eli5

ELI5 is a Python package that helps debug machine learning classifiers and interpret their predictions.
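
A minimal sketch of how ELI5 might be used on a linear classifier; the model and dataset are placeholders, and older ELI5 releases may lag behind the latest scikit-learn versions.

```python
# Minimal ELI5 sketch: global weights and a single-prediction explanation.
import eli5
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

data = load_iris()
clf = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# Global view: the weights the classifier learned for each feature and class.
print(eli5.format_as_text(eli5.explain_weights(clf, feature_names=data.feature_names)))

# Local view: how each feature contributed to the prediction for one sample.
print(eli5.format_as_text(
    eli5.explain_prediction(clf, data.data[0], feature_names=data.feature_names)
))
```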

4、Shapash

Shapash, developed by data scientists at MAIF, provides several types of visualizations that make it easier to understand a model. Its summary views let users see the decisions the model proposes, and the library explains models mainly through this set of well-designed visualizations.
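
A hedged sketch of Shapash, assuming a recent release where SmartExplainer is importable from the top-level package; the regression model and dataset are placeholders.

```python
# Hedged Shapash sketch: compile an explainer and produce its main visualizations.
from shapash import SmartExplainer
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

data = fetch_california_housing(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestRegressor(n_estimators=50).fit(X_train, y_train)

xpl = SmartExplainer(model=model)
xpl.compile(x=X_test)                  # computes feature contributions for the test set
xpl.plot.features_importance()         # global importance summary
xpl.plot.contribution_plot("MedInc")   # how one feature drives predictions
# xpl.run_app()                        # optional: launch the interactive web app
```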

5、Anchors

Anchors explains the behavior of complex models using high-precision rules called anchors, which represent locally "sufficient" conditions for a prediction. The algorithm can efficiently compute an explanation for any black-box model, with high-probability guarantees.
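
A hedged sketch using the reference anchor-exp implementation (marcotcr/anchor); exact signatures can vary between versions, and the model and dataset are placeholders.

```python
# Hedged Anchors sketch: find a high-precision rule that "anchors" one prediction.
from anchor import anchor_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier().fit(data.data, data.target)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=list(data.target_names),
    feature_names=data.feature_names,
    train_data=data.data,
)
# threshold is the desired precision of the anchor rule.
exp = explainer.explain_instance(data.data[0], model.predict, threshold=0.95)
print("Anchor:", " AND ".join(exp.names()))
print("Precision:", exp.precision(), "Coverage:", exp.coverage())
```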

6、BreakDown

BreakDown is a tool that can be used to explain the predictions of linear models. It works by decomposing the model's output into the contributions of individual input features. The package has two main components: Explainer() and Explanation().
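
A rough sketch following the pyBreakDown README; the Explainer/explain/visualize names are taken from that package's docs and may differ between versions, and the model and dataset are placeholders.

```python
# Hedged BreakDown sketch: decompose one prediction into per-feature contributions.
from pyBreakDown.explainer import Explainer
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

data = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = DecisionTreeRegressor(max_depth=5).fit(X_train, y_train)

exp = Explainer(clf=model, data=X_train, colnames=data.feature_names)
# "up" builds the explanation by greedily adding the most important features first.
explanation = exp.explain(observation=X_test[0], direction="up")
explanation.text()         # textual breakdown of contributions
explanation.visualize()    # waterfall-style plot of the same decomposition
```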

7、Interpret-Text

Interpret-Text combines community-developed interpretability techniques for NLP models with a visualization dashboard for viewing the results. Experiments can be run with multiple state-of-the-art explainers and compared against each other. The toolkit can explain machine learning models globally, per label, or locally, per document.
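
A heavily hedged sketch of the toolkit's ClassicalTextExplainer; the module path and method names follow the project's sample notebooks and may change between releases, and the tiny corpus below is only a placeholder.

```python
# Heavily hedged Interpret-Text sketch using its ClassicalTextExplainer.
from interpret_text.experimental.classical import ClassicalTextExplainer
from sklearn.preprocessing import LabelEncoder

# Tiny placeholder corpus, only to illustrate the call pattern.
X_train = [
    "great movie, loved it", "wonderful acting", "a fantastic film",
    "really enjoyable", "superb plot",
    "terrible movie", "boring and slow", "awful acting",
    "a complete waste of time", "very disappointing",
]
y_train = LabelEncoder().fit_transform(
    ["pos", "pos", "pos", "pos", "pos", "neg", "neg", "neg", "neg", "neg"]
)

explainer = ClassicalTextExplainer()                       # bag-of-words + linear model
classifier, best_params = explainer.fit(X_train, y_train)  # trains the underlying model

# Local explanation: per-token contributions for a single document.
explanation = explainer.explain_local("a wonderful but slow film")
print(explanation.local_importance_values)
```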

8、aix360 (AI Explainability 360)

The AI Explainability 360 toolkit is an open-source library developed by IBM and widely used on its platform. It contains a comprehensive set of algorithms covering different dimensions of explanation, as well as proxy explainability metrics.
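
A hedged sketch of one AIX360 algorithm, ProtodashExplainer, which summarizes a dataset by selecting weighted prototype rows; argument and return conventions follow the package docs and may differ by version.

```python
# Hedged AIX360 sketch: pick prototype rows that summarize a dataset with Protodash.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer
from sklearn.datasets import load_iris

X = load_iris().data.astype(float)

explainer = ProtodashExplainer()
# Select m=5 prototypes from X that best represent the distribution of X itself.
weights, indices, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", indices)
print("Prototype weights:", np.round(weights, 3))
```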

9、OmniXAI

OmniXAI (short for Omni eXplainable AI) addresses several pain points in interpreting the judgments produced by machine learning models in practice.
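
A hedged sketch based on the OmniXAI documentation; the Tabular and TabularExplainer names come from the project docs, while the preprocessing lambda is a simplification that may need adjusting for a given model.

```python
# Hedged OmniXAI sketch: run several explainers at once on a tabular classifier.
from omnixai.data.tabular import Tabular
from omnixai.explainers.tabular import TabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris(as_frame=True)
model = RandomForestClassifier().fit(data.data, data.target)

train = Tabular(data.data)               # feature DataFrame wrapped as OmniXAI data

explainer = TabularExplainer(
    explainers=["lime", "shap"],         # which explanation methods to run
    mode="classification",
    data=train,
    model=model,
    preprocess=lambda x: x.to_pd().values,  # model expects a plain numpy array
)
test_instances = Tabular(data.data.iloc[:2])
local_explanations = explainer.explain(X=test_instances)
```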

10、XAI (explainable AI)

The XAI library is maintained by The Institute for Ethical AI & ML and is developed according to the 8 principles of Responsible Machine Learning. It is still in alpha, so it should not be used for production workflows.
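
A hedged sketch based on the ethicalml/xai README; the function names come from the project docs, and since the library is alpha they may change. The toy dataframe is a placeholder.

```python
# Hedged XAI sketch: inspect class imbalance and correlations across a protected column.
import pandas as pd
import xai

# Toy placeholder dataframe with a protected attribute and a binary target.
df = pd.DataFrame({
    "age": [25, 32, 47, 51, 29, 38],
    "gender": ["f", "m", "f", "m", "m", "f"],
    "loan_approved": [1, 0, 1, 1, 0, 1],
})

# Class balance of the target, split by the protected column.
xai.imbalance_plot(df, "gender", "loan_approved")

# Correlation matrix, including categorical columns.
xai.correlations(df, include_categorical=True)
```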



Reposted from: blog.csdn.net/weixin_48878618/article/details/135022116