【Summary of PyTorch Model Quantization Methods】

Backends: x86 (server) and ARM (mobile/embedded platforms);

Corresponding parameters: 'fbgemm' and 'qnnpack'

API call: torch.quantization.get_default_qconfig('fbgemm')
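A minimal sketch of selecting the backend, assuming these engine/qconfig names are available on your PyTorch build (newer releases expose the same calls under torch.ao.quantization):

import torch

# choose the quantized kernel backend: 'fbgemm' for x86 servers,
# 'qnnpack' for ARM mobile/embedded devices
torch.backends.quantized.engine = 'fbgemm'

# fetch the matching default qconfig (observers + quantization scheme)
qconfig = torch.quantization.get_default_qconfig('fbgemm')
print(qconfig)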

1. Dynamic quantization code example:

import torch
 
# define a floating point model
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        self.fc = torch.nn.Linear(4, 4)
 
    def forward(self, x):
        x = self.fc(x)
        return x
 
# create a model instance
model_fp32 = M()
# create a quantized model instance
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32,  # the original model
    {torch.nn.Linear},  # a set of layers to dynamically quantize
    dtype=torch.qint8)  # the target dtype for quantized weights
 
# run the model
input_fp32 = torch.randn(4, 4, 4, 4)
res = model_int8(input_fp32)

Applicable to layers such as Linear, LSTM, and RNN;

Weights are quantized ahead of time; activations are quantized dynamically (on the fly) during inference;
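Since dynamic quantization also covers LSTM/RNN layers, here is a minimal sketch for an LSTM model (the layer sizes and input shape are illustrative assumptions, not from the original post):

import torch

# a small LSTM-based model; sizes are arbitrary, for illustration only
class LSTMModel(torch.nn.Module):
    def __init__(self):
        super(LSTMModel, self).__init__()
        self.lstm = torch.nn.LSTM(input_size=8, hidden_size=16)
        self.fc = torch.nn.Linear(16, 4)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.fc(out)

model_fp32 = LSTMModel()

# quantize both LSTM and Linear weights to int8; activations are
# quantized on the fly at inference time
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {torch.nn.LSTM, torch.nn.Linear}, dtype=torch.qint8)

# input shape: (seq_len=5, batch=2, features=8)
res = model_int8(torch.randn(5, 2, 8))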

2. Static quantization example:

import torch
 
# define a floating point model where some layers could be statically quantized
class M(torch.nn.Module):
    def __init__(self):
        super(M, self).__init__()
        # QuantStub converts tensors from floating point to quantized
        self.quant = torch.quantization.QuantStub()
        self.conv = torch.nn.Conv2d(1, 1, 1)
        self.relu = torch.nn.ReLU()
        # DeQuantStub converts tensors from quantized to floating point
        self.dequant = torch.quantization.DeQuantStub()
 
    def forward(self, x):
        # manually specify where tensors will be converted from floating
        # point to quantized in the quantized model
        x = self.quant(x)
        x = self.conv(x)
        x = self.relu(x)
        # manually specify where tensors will be converted from quantized
        # to floating point in the quantized model
        x = self.dequant(x)
        return x
 
# create a model instance
model_fp32 = M()
 
# model must be set to eval mode for static quantization logic to work
model_fp32.eval()
 
# attach a global qconfig, which contains information about what kind
# of observers to attach. Use 'fbgemm' for server inference and
# 'qnnpack' for mobile inference. Other quantization configurations such
# as selecting symmetric or asymmetric quantization and MinMax or L2Norm
# calibration techniques can be specified here.
model_fp32.qconfig = torch.quantization.get_default_qconfig('fbgemm')
 
# Fuse the activations to preceding layers, where applicable.
# This needs to be done manually depending on the model architecture.
# Common fusions include `conv + relu` and `conv + batchnorm + relu`
model_fp32_fused = torch.quantization.fuse_modules(model_fp32, [['conv', 'relu']])
 
# Prepare the model for static quantization. This inserts observers in
# the model that will observe activation tensors during calibration.
model_fp32_prepared = torch.quantization.prepare(model_fp32_fused)
 
# calibrate the prepared model to determine quantization parameters for activations
# in a real world setting, the calibration would be done with a representative dataset
input_fp32 = torch.randn(4, 1, 4, 4)
model_fp32_prepared(input_fp32)
 
# Convert the observed model to a quantized model. This does several things:
# quantizes the weights, computes and stores the scale and bias value to be
# used with each activation tensor, and replaces key operators with quantized
# implementations.
model_int8 = torch.quantization.convert(model_fp32_prepared)
 
# run the model, relevant calculations will happen in int8
res = model_int8(input_fp32)

1. Static quantization requires defining quant and dequant stubs at the start and end of the model;

2. Configure the backend (qconfig);

3. Declare the layers to fuse; typically conv + relu, or conv + bn + relu;

4. Prepare the model for quantization (insert observers);

5. Calibrate with a representative dataset (typically drawn from your training task);

6. Convert the model; here it is converted to int8 precision;

7. Validate the quantized model (see the sketch after this list).
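For step 7, one possible sketch of validating the converted model by comparing outputs and serialized size against the fp32 model; reusing model_fp32, model_int8, and input_fp32 from the static example above, with illustrative file names:

import os
import torch

# compare fp32 vs int8 outputs on the same calibration input
with torch.no_grad():
    out_fp32 = model_fp32(input_fp32)
    out_int8 = model_int8(input_fp32)
print('max abs diff:', (out_fp32 - out_int8).abs().max().item())

# compare on-disk sizes (file names are illustrative)
torch.save(model_fp32.state_dict(), 'model_fp32.pth')
torch.save(model_int8.state_dict(), 'model_int8.pth')
print('fp32 size (KB):', os.path.getsize('model_fp32.pth') / 1024)
print('int8 size (KB):', os.path.getsize('model_int8.pth') / 1024)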

To follow the code, see: https://github.com/oyjGithub


Reposted from blog.csdn.net/weixin_42483745/article/details/125700925