Model Distillation: How to Make Models Easier to Understand and Interpret


Author: Zen and the Art of Computer Programming


1. Overview

Model explainability (Explainable AI, XAI) has been one of the most active areas of artificial intelligence research in recent years. Its goal is to enable machine learning models to explain their own predictions and decisions, thereby increasing people's trust in them. Distillation is a commonly used technique that can help us train models that are easier to understand. This article describes how to use distillation to improve model interpretability.

2. Technical principles and concepts


2.1 Explanation of basic concepts

Distillation is a model compression technique in which a small "student" model is trained to reproduce the outputs of a larger "teacher" model, typically by matching the teacher's softened class probabilities rather than only the hard labels. Because the student can be a much simpler model, such as a shallow network or a decision tree, it is often far easier to inspect and interpret than the teacher while retaining much of its predictive accuracy. A minimal sketch of the standard distillation objective follows.
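The sketch below illustrates one common way to write the distillation loss in PyTorch; the framework choice, the temperature T, the weighting alpha, and the distillation_loss helper are illustrative assumptions rather than something prescribed by this article. The student is trained to match the teacher's temperature-softened output distribution while still fitting the ground-truth labels.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Combine a soft-target term (match the teacher's temperature-softened
    distribution) with a hard-target term (cross-entropy on true labels)."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # multiplied by T^2 so the gradient magnitude stays comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Example usage with random tensors standing in for real model outputs.
if __name__ == "__main__":
    batch, num_classes = 8, 10
    student_logits = torch.randn(batch, num_classes, requires_grad=True)
    teacher_logits = torch.randn(batch, num_classes)
    labels = torch.randint(0, num_classes, (batch,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In an interpretability setting, student_logits would come from a deliberately simple model, so that after training the student can be read or visualized directly while still approximating the teacher's behavior.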
