OpenVINO 2022.3 in Practice, Part 5: INT8 Quantization of an Image Classification Model with NNCF

The Neural Network Compression Framework (NNCF) provides a new Python post-training quantization API designed to reuse the model training or validation code that is usually already available in the model's source framework, such as PyTorch or TensorFlow. The API is cross-framework and currently supports models represented in PyTorch, TensorFlow 2.x, ONNX, and OpenVINO.
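
The walkthrough below assumes OpenVINO 2022.3 and NNCF installed from PyPI (package names as published there; pin versions to taste):

pip install openvino-dev==2022.3.0 nncf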

1 Prepare the Model to Be Quantized

from pathlib import Path

import numpy as np
import torch
import torch.nn as nn
import openvino.runtime as ov
from openvino.tools import mo
from sklearn.metrics import accuracy_score
from torchvision import datasets, transforms
from tqdm import tqdm

import nncf

from SlimPytorch.quantization.mobilenet_v2 import MobileNetV2

# Set the data and model directories
DATA_DIR = '/home/liumin/data/hymenoptera/val'
MODEL_DIR = './weights'


def load_pretrain_model(model_dir):
    # MobileNetV2 from SlimPytorch with its classifier head enabled
    model = MobileNetV2('mobilenet_v2', classifier=True)
    # Replace the final linear layer with a 2-class head (the hymenoptera set: ants vs. bees)
    num_ftrs = model.fc[1].in_features
    model.fc[1] = nn.Linear(num_ftrs, 2)
    model.load_state_dict(torch.load(model_dir, map_location='cpu'))
    return model

def load_val_data(data_dir):
    # Standard ImageNet-style preprocessing: resize, center-crop, normalize
    data_transform = transforms.Compose([
        transforms.Resize(224),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    image_dataset = datasets.ImageFolder(data_dir, data_transform)
    data_loader = torch.utils.data.DataLoader(image_dataset, batch_size=16, shuffle=False, num_workers=4)
    return data_loader


model = load_pretrain_model(Path(MODEL_DIR) / 'mobilenet_v2_train.pt')
model.eval()


val_loader = load_val_data(DATA_DIR)
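
As a quick sanity check (not part of the original post), you can push one batch through the FP32 model before quantizing; the replaced head should produce two logits per image:

images, _ = next(iter(val_loader))
with torch.no_grad():
    logits = model(images)
print(logits.shape)  # expected: torch.Size([16, 2]) with batch_size=16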

2 Quantize the Model

def transform_fn(data_item):
    # Each DataLoader item is an (images, labels) tuple; NNCF only needs the images
    images, _ = data_item
    return images

# Wrap the validation DataLoader as a calibration dataset and run post-training quantization
calibration_dataset = nncf.Dataset(val_loader, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)
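
nncf.quantize is called here with its defaults. The post-training API also exposes optional knobs; a hedged sketch (parameter names per the NNCF API, values illustrative only):

quantized_model = nncf.quantize(
    model,
    calibration_dataset,
    subset_size=300,                       # number of calibration items to draw (300 is the default)
    preset=nncf.QuantizationPreset.MIXED,  # symmetric weights, asymmetric activations
)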


ov_model = mo.convert_model(model.cpu(), input_shape=[-1, 3, 224, 224])
ov_quantized_model = mo.convert_model(quantized_model.cpu(), input_shape=[-1, 3, 224, 224])
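
The saving code below also calls get_model_size, which the original post never defines. A minimal sketch consistent with how it is used (pass the IR .xml path; the companion .bin weights file sits next to it) could be:

def get_model_size(ir_path, verbose=False):
    # Total IR size in MB: .xml topology plus .bin weights
    xml_size = Path(ir_path).stat().st_size
    bin_size = Path(ir_path).with_suffix(".bin").stat().st_size
    size_mb = (xml_size + bin_size) / 1024 ** 2
    if verbose:
        print(f"Model size: {size_mb:.2f} MB")
    return size_mb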


fp32_ir_path = "weights/mobilenet_v2_fp32.xml"
ov.serialize(ov_model, fp32_ir_path)
print(f"Save FP32 model: {fp32_ir_path}")
fp32_model_size = get_model_size(fp32_ir_path, verbose=True)

int8_ir_path = "weights/mobilenet_v2_int8.xml"
ov.serialize(ov_quantized_model, int8_ir_path)
print(f"Save INT8 model: {int8_ir_path}")
int8_model_size = get_model_size(int8_ir_path, verbose=True)

3 Compare the Accuracy of the Original and Quantized Models

def validate(model: ov.Model, val_loader: torch.utils.data.DataLoader) -> float:
    predictions = []
    references = []

    # Compile the OpenVINO model for the default device and grab its single output
    compiled_model = ov.compile_model(model)
    output = compiled_model.outputs[0]

    for images, target in tqdm(val_loader):
        pred = compiled_model(images)[output]
        predictions.append(np.argmax(pred, axis=1))
        references.append(target)

    predictions = np.concatenate(predictions, axis=0)
    references = np.concatenate(references, axis=0)
    # sklearn expects (y_true, y_pred); accuracy is symmetric, but keep the idiomatic order
    return accuracy_score(references, predictions)


print("Validate OpenVINO FP32 model:")
fp32_top1 = validate(ov_model, val_loader)
print(f"Accuracy @ top1: {fp32_top1:.3f}")

print("Validate OpenVINO INT8 model:")
int8_top1 = validate(ov_quantized_model, val_loader)
print(f"Accuracy @ top1: {int8_top1:.3f}")

FP32: Accuracy @ top1: 0.922

INT8: Accuracy @ top1: 0.915

Quantization costs only about 0.7 percentage points of top-1 accuracy on this validation set.

4 Compare the Performance of the Original and Quantized Models
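
Throughput is measured with OpenVINO's benchmark_app on CPU, using the async API, 15-second runs, and a fixed 1x3x224x224 input shape.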

FP32:

λ benchmark_app -m weights/mobilenet_v2_fp32.xml -d CPU -api async -t 15 -shape [1,3,224,224]
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2022.3.0-9052-9752fafe8eb-releases/2022/3
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2022.3.0-9052-9752fafe8eb-releases/2022/3
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 18.00 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ]     input_0 (node: input_0) : f32 / [...] / [?,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     545 (node: 545) : f32 / [...] / [?,2]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[ INFO ] Reshaping model: 'input_0': [1,3,224,224]
[ INFO ] Reshape model took 3.00 ms
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ]     input_0 (node: input_0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     545 (node: 545) : f32 / [...] / [1,2]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 101.00 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: torch_jit
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ]   NUM_STREAMS: 4
[ INFO ]   AFFINITY: Affinity.NONE
[ INFO ]   INFERENCE_NUM_THREADS: 16
[ INFO ]   PERF_COUNT: False
[ INFO ]   INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]   PERFORMANCE_HINT: PerformanceMode.THROUGHPUT
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given for input 'input_0'!. This input will be filled with random values!
[ INFO ] Fill input 'input_0' with random values
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 15000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 10.24 ms
[Step 11/11] Dumping statistics report
[ INFO ] Count:            13180 iterations
[ INFO ] Duration:         15009.73 ms
[ INFO ] Latency:
[ INFO ]    Median:        4.40 ms
[ INFO ]    Average:       4.49 ms
[ INFO ]    Min:           3.55 ms
[ INFO ]    Max:           10.87 ms
[ INFO ] Throughput:   878.10 FPS

INT8:

λ benchmark_app -m weights\mobilenet_v2_int8.xml -d CPU -api async -t 15 -shape [1,3,224,224]
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2022.3.0-9052-9752fafe8eb-releases/2022/3
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2022.3.0-9052-9752fafe8eb-releases/2022/3
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 48.00 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Model inputs:
[ INFO ]     input_0 (node: input_0) : f32 / [...] / [?,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     1440 (node: 1440) : f32 / [...] / [?,2]
[Step 5/11] Resizing model to match image sizes and given batch
[ INFO ] Model batch size: 1
[ INFO ] Reshaping model: 'input_0': [1,3,224,224]
[ INFO ] Reshape model took 11.00 ms
[Step 6/11] Configuring input of the model
[ INFO ] Model inputs:
[ INFO ]     input_0 (node: input_0) : u8 / [N,C,H,W] / [1,3,224,224]
[ INFO ] Model outputs:
[ INFO ]     1440 (node: 1440) : f32 / [...] / [1,2]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 362.12 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ]   NETWORK_NAME: torch_jit
[ INFO ]   OPTIMAL_NUMBER_OF_INFER_REQUESTS: 8
[ INFO ]   NUM_STREAMS: 8
[ INFO ]   AFFINITY: Affinity.NONE
[ INFO ]   INFERENCE_NUM_THREADS: 16
[ INFO ]   PERF_COUNT: False
[ INFO ]   INFERENCE_PRECISION_HINT: <Type: 'float32'>
[ INFO ]   PERFORMANCE_HINT: PerformanceMode.THROUGHPUT
[ INFO ]   PERFORMANCE_HINT_NUM_REQUESTS: 0
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given for input 'input_0'!. This input will be filled with random values!
[ INFO ] Fill input 'input_0' with random values
[Step 10/11] Measuring performance (Start inference asynchronously, 8 inference requests, limits: 15000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 7.76 ms
[Step 11/11] Dumping statistics report
[ INFO ] Count:            19008 iterations
[ INFO ] Duration:         15010.91 ms
[ INFO ] Latency:
[ INFO ]    Median:        6.10 ms
[ INFO ]    Average:       6.29 ms
[ INFO ]    Min:           4.89 ms
[ INFO ]    Max:           27.49 ms
[ INFO ] Throughput:   1266.28 FPS
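
At the same input shape, the INT8 model reaches 1266.28 FPS versus 878.10 FPS for FP32, roughly a 1.44x throughput gain; per-request latency rises slightly because the THROUGHPUT hint schedules 8 parallel streams instead of 4.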

Reprinted from blog.csdn.net/shanglianlm/article/details/130891285