[Model Compression] Getting Started with Distiller
2023-10-04 18:32:00
Introduction
- A neural-network compression framework from Intel AI Lab, built on PyTorch
Installation
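A source install roughly follows the project README (exact steps may vary by release; a Python 3 environment is assumed):

```bash
git clone https://github.com/IntelLabs/distiller.git
cd distiller
pip3 install -e .   # editable install, pulls in the dependencies
```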
Compression methods
- Weight regularization
- Weight pruning
- Post-training quantization
- Quantization-aware training (quantization during training)
- Conditional computation
- Low-rank decomposition
- Knowledge distillation
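In Distiller, all of these methods are driven by a compression schedule whose callbacks are wired into an ordinary PyTorch training loop. Below is a minimal sketch of that integration, assuming Distiller's documented `CompressionScheduler` API; the toy model, the random data, and `schedule.yaml` are placeholders, not part of Distiller itself:

```python
import torch
import torch.nn as nn
import distiller  # assumes Distiller is installed as above

# Toy setup so the sketch is self-contained; swap in a real model/dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
data = [(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))) for _ in range(10)]

# Parse a compression schedule from YAML ('schedule.yaml' is a placeholder;
# see the configuration section below for its format).
scheduler = distiller.file_config(model, optimizer, 'schedule.yaml')

steps_per_epoch = len(data)
for epoch in range(3):
    scheduler.on_epoch_begin(epoch)
    for step, (inputs, targets) in enumerate(data):
        scheduler.on_minibatch_begin(epoch, step, steps_per_epoch)  # applies masks
        loss = criterion(model(inputs), targets)
        # gives the scheduled policies (e.g. regularizers) a chance to modify the loss
        loss = scheduler.before_backward_pass(epoch, step, steps_per_epoch, loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.on_minibatch_end(epoch, step, steps_per_epoch)    # re-applies masks
    scheduler.on_epoch_end(epoch)
```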
General directory structure
Core code implementation
Configuration files for all the examples
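As an illustration of what such a schedule file can look like, here is a small sketch following Distiller's documented YAML schema for gradual magnitude pruning; the pruner name, layer name, sparsity targets, and epoch numbers are made up for the example:

```yaml
version: 1
pruners:
  conv_pruner:
    class: AutomatedGradualPruner   # magnitude-based AGP pruning
    initial_sparsity: 0.05
    final_sparsity: 0.50
    weights: [module.conv1.weight]  # hypothetical layer name

policies:
  - pruner:
      instance_name: conv_pruner
    starting_epoch: 0
    ending_epoch: 30
    frequency: 2
```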
Example: a typical pruning workflow
- Initialize the network
- Evaluate the importance of the model's parameters
- Remove the unimportant neurons
- Fine-tune the network
- Continue pruning and retraining (sketched below)
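This loop can also be sketched without Distiller, in plain PyTorch, using weight magnitude as the importance score. Everything below is illustrative: the tiny model, the random training data, and the sparsity schedule are made up, and the helpers `magnitude_prune`/`apply_masks` are not library functions:

```python
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, prune_fraction: float) -> dict:
    """Steps 2+3: rank weights by |w| and zero out the smallest fraction."""
    masks = {}
    for name, param in model.named_parameters():
        if param.dim() < 2:          # skip biases / norm parameters
            continue
        k = int(param.numel() * prune_fraction)
        if k == 0:
            continue
        threshold = param.abs().flatten().kthvalue(k).values
        mask = (param.abs() > threshold).float()
        param.data.mul_(mask)        # remove the "unimportant" weights
        masks[name] = mask
    return masks

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Keep pruned weights at zero while fine-tuning."""
    for name, param in model.named_parameters():
        if name in masks:
            param.data.mul_(masks[name])

# Step 1: initialize (a toy model; substitute your own).
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# Step 5: iterate prune -> fine-tune, raising the sparsity each round.
for sparsity in (0.2, 0.4, 0.6):
    masks = magnitude_prune(model, sparsity)
    for _ in range(100):             # Step 4: brief fine-tuning on random data
        x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
        loss = criterion(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        apply_masks(model, masks)    # re-zero pruned weights after each update
```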
Source: blog.csdn.net/qq_44653420/article/details/133465474