[Model Compression] Getting Started with Distiller

Introduction

  • Distiller is Intel AI Lab's open-source neural network compression framework, built on PyTorch

Install

[Figure: installation steps]
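
The original figure is not reproduced here. For reference, a typical source install follows the IntelLabs/distiller README (paths and environment details may differ on your machine):

```bash
# Clone the repository and install Distiller in editable mode.
git clone https://github.com/IntelLabs/distiller.git
cd distiller

# A Python virtual environment is recommended.
python3 -m venv env
source env/bin/activate

# Installs Distiller together with its dependencies (PyTorch among them).
pip install -e .
```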

Compression methods

  • Weight regularization
  • Weight pruning
  • Post-training quantization
  • Quantization-aware training
  • Conditional computation
  • Low-rank decomposition
  • Knowledge distillation

[Figure: overview of the compression methods supported by Distiller]
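
All of these methods are driven by a compression scheduler that is wired into an otherwise ordinary training loop. A minimal sketch of that integration, following the pattern in Distiller's usage documentation (the model, data, and the `schedule.yaml` path are placeholders invented for this example):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
import distiller

# Toy model and dummy data, standing in for a real application.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
train_loader = DataLoader(
    TensorDataset(torch.randn(640, 784), torch.randint(0, 10, (640,))),
    batch_size=64)
steps_per_epoch = len(train_loader)

# Build the scheduler from a YAML schedule file ('schedule.yaml' is a placeholder path).
compression_scheduler = distiller.file_config(model, optimizer, 'schedule.yaml')

for epoch in range(3):
    compression_scheduler.on_epoch_begin(epoch)
    for step, (inputs, labels) in enumerate(train_loader):
        compression_scheduler.on_minibatch_begin(epoch, step, steps_per_epoch)
        loss = criterion(model(inputs), labels)
        # Policies (e.g. regularizers) may add terms to the loss before backprop.
        loss = compression_scheduler.before_backward_pass(
            epoch, step, steps_per_epoch, loss)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Re-applies pruning masks after the weight update.
        compression_scheduler.on_minibatch_end(epoch, step, steps_per_epoch)
    compression_scheduler.on_epoch_end(epoch)
```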

Repository layout

[Figure: top-level directory structure of the Distiller repository]
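
For orientation (the original figure showed the repository tree), the top level of IntelLabs/distiller is organized roughly as follows; consult the repository itself for the authoritative layout:

```
distiller/    # the library itself: pruning, quantization, regularization, scheduling
examples/     # runnable sample applications and their YAML schedule files
jupyter/      # notebooks for exploring and visualizing compression results
tests/        # unit tests
docs/         # documentation
```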

Core code implementation

[Figure: core code implementation]
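
The original figure walked through the core code. The essential idea behind Distiller's magnitude pruners can be sketched in a few lines of plain PyTorch; this is an illustrative reimplementation, not Distiller's actual source:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a 0/1 mask that zeroes out the smallest-magnitude fraction of `weight`."""
    num_prune = int(weight.numel() * sparsity)
    if num_prune == 0:
        return torch.ones_like(weight)
    # Threshold = magnitude of the num_prune-th smallest |w|.
    threshold = weight.abs().flatten().kthvalue(num_prune).values
    # Strictly-greater comparison may prune slightly more than `sparsity` on ties.
    return (weight.abs() > threshold).float()

w = torch.randn(4, 8)
mask = magnitude_prune(w, sparsity=0.5)
w_pruned = w * mask  # in practice the mask is re-applied after every weight update
print(f"actual sparsity: {1 - mask.mean().item():.2f}")
```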

Configuration files for the examples

[Figure: example configuration files]
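
Each example ships with a YAML schedule describing what to compress and when. A representative (hypothetical) schedule in the format described by Distiller's documentation, using the AutomatedGradualPruner, looks roughly like this; layer names and epoch ranges are invented for illustration:

```yaml
version: 1
pruners:
  conv_pruner:
    class: AutomatedGradualPruner
    initial_sparsity: 0.05
    final_sparsity: 0.70
    weights: [module.conv1.weight, module.conv2.weight]

lr_schedulers:
  training_lr:
    class: StepLR
    step_size: 30
    gamma: 0.1

policies:
  - pruner:
      instance_name: conv_pruner
    starting_epoch: 0
    ending_epoch: 30
    frequency: 2

  - lr_scheduler:
      instance_name: training_lr
    starting_epoch: 0
    ending_epoch: 100
    frequency: 1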

Example

  • Initialize the network
  • Evaluate the importance of the model's parameters
  • Remove the unimportant neurons/weights
  • Fine-tune the remaining network
  • Continue pruning and retraining (a minimal sketch of this loop follows the figure below)

[Figure: iterative pruning workflow]
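
Put together, the loop above can be sketched framework-agnostically. This is a toy illustration of iterative magnitude pruning with fine-tuning, using element-wise weight pruning for simplicity; it reuses the `magnitude_prune` helper from the core-code sketch above, and the model, data, and hyperparameters are invented:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))  # 1. initialize
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))  # dummy data

masks = {}
for sparsity in (0.3, 0.5, 0.7):                 # 5. continue pruning and retraining
    # 2.-3. rank parameters by magnitude and mask out the least important ones
    for name, p in model.named_parameters():
        if p.dim() > 1:                          # prune weight matrices, not biases
            masks[name] = magnitude_prune(p.data, sparsity)
            p.data *= masks[name]
    # 4. fine-tune while keeping the pruned weights at zero
    for _ in range(50):
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        for name, p in model.named_parameters():
            if name in masks:
                p.data *= masks[name]            # re-apply masks after each update
    print(f"sparsity {sparsity:.0%}: loss {loss.item():.3f}")
```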

Source: https://blog.csdn.net/qq_44653420/article/details/133465474