OpenVINO deployment of YOLOv5 v6: process record

Introduction

This article summarizes the OpenVINO conversion process for YOLOv5, along with the related OpenVINO demos and some points that need attention.

Introduction to OpenVINO

Some people say OpenVINO is the fastest CPU-based model acceleration toolkit. That is hardly surprising: as far as I can tell, no company other than Intel itself would invest in this kind of work, emmm... Compared with a traditional CV pipeline, OpenVINO has the following advantages:

The baseline in that comparison is a bit dated, though. It mainly shows that OpenVINO optimizes many of the calls a traditional computer-vision algorithm makes through OpenCV and, combined with the rest of its own toolkit, achieves a 1+1>2 effect. The figure below further explains the optimization OpenVINO performs, namely the green box, the Model Optimizer; the official website analyzes it in more depth, so I will not quote it here.

If you want the full background on OpenVINO, you can sign up for the OpenVINO primary certification exam. There are about 12 tests with 10 questions each; if you work on video-related business the questions are fairly easy, mostly common sense. As for the OpenVINO course itself, I personally felt I got little out of the first PPT, so I just went through the official website instead. Some answers can also be found in the multiple-choice questions of the OpenVINO primary certification course. It takes roughly half an hour to get the certificate, but after taking part in this joint activity I found it less polished than the NetEase Cloud certificate, and I did not see any anti-counterfeiting on it either, emmm...

OpenVINO installation

Here I recommend going straight to the official open-source GitHub project openvino_notebooks; the various installation methods are all covered in its README.md:
The link is: https://github.com/openvinotoolkit/openvino_notebooks/blob/main/README.md#-installation-guide

You can choose to download and run the installer package; the method I use here is based on the Docker image.

OpenVINO publishes regularly updated Docker images on Docker Hub. Here I pick the ubuntu20 image to match my current system:
After pulling the image, even though the host has an NVIDIA card installed, nothing related needs to be specified; you can start it on the CPU as the documentation suggests:

docker run -it --rm --net=host --name openvino openvino/ubuntu20_dev:latest 

The only extra parameter needed is --net=host, which switches Docker from the default bridge network to host networking. This makes it convenient to visualize the model with Netron for yolov5 later, and lets the page be reached from outside the host.

So far, the environment has been deployed, and the following is the environment test.

OpenVINO image test

First of all, the image weighs in at 5.49 GB, but it does contain a lot. After apt update, install vim and the other necessary tools. Checking the pip packages matched to the OpenVINO system environment reveals a problem: the OpenCV provided by the image was not compiled from source with cmake but installed directly via pip. Confirming this is very simple; enter the Python interpreter:

>>> import cv2
>>> print(cv2.getBuildInformation())

Select a part of the output data as:


General configuration for OpenCV 4.5.5 =====================================
  Version control:               unknown

  Platform:
    Host:                        Linux 5.13.0-1025-azure x86_64
    CMake:                       3.22.5
    CMake generator:             Unix Makefiles
    CMake build tool:            /bin/gmake
    Configuration:               Release

  C/C++:
    Built as dynamic libs?:      NO
    C++ standard:                11
    C++ Compiler:                /usr/lib/ccache/compilers/c++  (ver 10.2.1)
    C++ flags (Release):         -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -O3 -DNDEBUG  -DNDEBUG
    C++ flags (Debug):           -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Wnon-virtual-dtor -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wundef -Winit-self -Wpointer-arith -Wshadow -Wsign-promo -Wuninitialized -Wsuggest-override -Wno-delete-non-virtual-dtor -Wno-comment -Wimplicit-fallthrough=3 -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -fvisibility-inlines-hidden -g  -O0 -DDEBUG -D_DEBUG
    C Compiler:                  /usr/lib/ccache/compilers/cc
    C flags (Release):           -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -O3 -DNDEBUG  -DNDEBUG
    C flags (Debug):             -Wl,-strip-all   -fsigned-char -W -Wall -Wreturn-type -Waddress -Wsequence-point -Wformat -Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wstrict-prototypes -Wundef -Winit-self -Wpointer-arith -Wshadow -Wuninitialized -Wno-comment -Wno-strict-overflow -fdiagnostics-show-option -Wno-long-long -pthread -fomit-frame-pointer -ffunction-sections -fdata-sections  -msse -msse2 -msse3 -fvisibility=hidden -g  -O0 -DDEBUG -D_DEBUG
    Linker flags (Release):      -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -L/root/ffmpeg_build/lib  -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
    Linker flags (Debug):        -Wl,--exclude-libs,libippicv.a -Wl,--exclude-libs,libippiw.a -L/root/ffmpeg_build/lib  -Wl,--gc-sections -Wl,--as-needed -Wl,--no-undefined
    ccache:                      YES
    Precompiled headers:         NO
    Extra dependencies:          /lib64/libopenblas.so Qt5::Core Qt5::Gui Qt5::Widgets Qt5::Test Qt5::Concurrent /usr/local/lib/libpng.so /lib64/libz.so dl m pthread rt
    3rdparty dependencies:       libprotobuf ade ittnotify libjpeg-turbo libwebp libtiff libopenjp2 IlmImf quirc ippiw ippicv

  OpenCV modules:
    To be built:                 calib3d core dnn features2d flann gapi highgui imgcodecs imgproc ml objdetect photo python3 stitching video videoio
    Disabled:                    world
    Disabled by dependency:      -
    Unavailable:                 java python2 ts
    Applications:                -
    Documentation:               NO
    Non-free algorithms:         NO

  Other third-party libraries:
    Intel IPP:                   2020.0.0 Gold [2020.0.0]
           at:                   /io/_skbuild/linux-x86_64-3.6/cmake-build/3rdparty/ippicv/ippicv_lnx/icv
    Intel IPP IW:                sources (2020.0.0)
              at:                /io/_skbuild/linux-x86_64-3.6/cmake-build/3rdparty/ippicv/ippicv_lnx/iw
    VA:                          NO
    Lapack:                      YES (/lib64/libopenblas.so)
    Eigen:                       NO
    Custom HAL:                  NO
    Protobuf:                    build (3.19.1)

  OpenCL:                        YES (no extra features)
    Include path:                /io/opencv/3rdparty/include/opencl/1.2
    Link libraries:              Dynamic load

  Python 3:
    Interpreter:                 /opt/python/cp36-cp36m/bin/python3.6 (ver 3.6.15)
    Libraries:                   libpython3.6m.a (ver 3.6.15)
    numpy:                       /opt/python/cp36-cp36m/lib/python3.6/site-packages/numpy/core/include (ver 1.13.3)
    install path:                python/cv2/python-3

  Python (for build):            /bin/python2.7

  Java:
    ant:                         NO
    JNI:                         NO
    Java wrappers:               NO
    Java tests:                  NO

  Install to:                    /io/_skbuild/linux-x86_64-3.6/cmake-install
-----------------------------------------------------------------

(I have trimmed the build output related to video request headers, ffmpeg and other plugins.) It is obvious that Intel did not compile OpenCV with cmake when preparing this environment; perhaps the Dockerfile omitted the compilation step to save time. As a result, running cv2.dnn.readNet in this image reports an error, because this OpenCV build does not include that capability. Looking at the code, you can use IECore instead:

import cv2 as cv

net = cv.dnn.readNet('face-detection-adas-0001.xml',
                     'face-detection-adas-0001.bin')
# Specify target device (CPU)
net.setPreferableTarget(cv.dnn.DNN_TARGET_CPU)

"""Replace readNet with read_network"""
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network('face-detection-adas-0001.xml', 'face-detection-adas-0001.bin')

Running the first method, the error is reported as:

Traceback (most recent call last):
File "facedetection.py", line 16, in
'pruned_mobilenet_reduced_ssd_shared_weights/dldt/face-detection-adas-0001.xml')
cv2.error: OpenCV(4.0.0) /io/opencv/modules/dnn/src/dnn.cpp:2538: error: (-2:Unspecified error) Build OpenCV with Inference Engine to enable loading models from Model Optimizer. in function 'readFromModelOptimizer

However, after testing the second approach I found that IECore only gets you through initialization; the calls that follow are written quite differently from OpenCV. For details, see the comparison between the OpenCV version and the OpenVINO version of a Python deep-learning inference program, although the runtimes are similar.

If you don't want to change the code at all, there is a second option besides rewriting it: uninstall the opencv installed directly by pip and install the opencv-python-inference-engine package instead. I also found this package while reading OpenVINO issues; apart from pipeline improvements I am not sure what else it adds, but with it cv.dnn.readNet runs without problems again.
The two packages opencv-python and opencv-python-inference-engine (the latter installed afterwards in my case) are incompatible, and opencv-python takes priority over the latter because of how pip matches the overlapping package contents. The default version is 4.5.5.64, as shown in the screenshot taken right after entering the environment. OpenVINO 2022 needs OpenCV 4.5+, and 4.6 does not work. I ended up replacing it because I was running the yolov5 project; the YOLO adaptation demo later on does not need the inference-engine package, so whether it stays uninstalled does not matter.
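If another install does bump the version past what OpenVINO 2022 accepts, pinning it back is straightforward (a sketch; 4.5.5.64 is simply the version shown above):

```shell
# Remove the conflicting builds, then pin the 4.5.x wheel;
# 4.6 did not work for me with OpenVINO 2022.
pip uninstall -y opencv-python opencv-python-inference-engine
pip install "opencv-python==4.5.5.64"
```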

The third way is to uninstall opencv, then, following the official documentation, pull the OpenCV source and build it with cmake, adding -DWITH_INF_ENGINE=ON. If the cmake output contains lines like the following, the build can proceed; the information below is quoted from issue 94:

https://github.com/openvinotoolkit/open_model_zoo/issues/94

-- Detected InferenceEngine: cmake package
...
--     Inference Engine:            YES (2019010000 / 1.6.0)
--                 libs:            /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/lib/intel64/libinference_engine.so
--             includes:            /opt/intel/openvino_2019.1.094/deployment_tools/inference_engine/include
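For completeness, that third route might look like the sketch below; -DWITH_INF_ENGINE=ON is the flag discussed above, while the InferenceEngine_DIR path is my assumption based on the 2022.x install layout, so adjust it to your installation:

```shell
git clone https://github.com/opencv/opencv.git
cd opencv && mkdir -p build && cd build
# Point cmake at the installed Inference Engine before building
cmake -DWITH_INF_ENGINE=ON \
      -DInferenceEngine_DIR=/opt/intel/openvino/runtime/cmake \
      -DCMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"
```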

Regarding the compilation of opencv, you can read my previous notes, so I won’t go into details here:

The whole process of compiling opencv with CPU under ubuntu18.04

OpenVINO adaptation for YOLOv5

The versions used here are OpenVINO 2022 and YOLOv5 v6.0. The main references are the following two GitHub projects:

https://github.com/violet17/yolov5_demo

https://github.com/Chen-MingChang/pytorch_YOLO_OpenVINO_demo

First follow the steps to build the yolo environment.

YOLOv5 environment setup

Run the following command in a Linux terminal:

$ git clone https://github.com/ultralytics/yolov5

If no tag is specified, the v6.1 version of yolov5 is pulled by default, which is what I am adapting here. After pulling the code, install the required packages:

$ cd yolov5/
$ pip install -r requirements.txt
$ pip install onnx

The YOLOv5 repository has accumulated several tags, and YOLOv5 comes with different backbones: YOLOv5s, YOLOv5m, YOLOv5l and YOLOv5x. The reference demo describes YOLOv5s from tag v3.0; here I download the v6.0 weights. Run the following command to download yolov5s.pt:

$ wget https://github.com/ultralytics/yolov5/releases/download/v6.0/yolov5s.pt 

Converting Torch weights to an ONNX file

The YOLOv5 repository provides a script, models/export.py, that exports PyTorch weights (*.pt) to ONNX weights (*.onnx). Run it; as long as no error occurs, the conversion succeeded:

$ python models/export.py  --weights yolov5-v3/yolov5s.pt  --img 640 --batch 1
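Note that in the v6.x tags the export script moved to the repository root and gained an --include flag, so if models/export.py is missing from your checkout, the equivalent invocation (my understanding of the v6.x layout) is:

```shell
# v6.x equivalent of the models/export.py call above
$ python export.py --weights yolov5s.pt --img 640 --batch 1 --include onnx
```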

Then you can see two new files in the folder: yolov5s.onnx and yolov5s.torchscript.

Convert ONNX files to IR files

Here you need to install the netron package to visualize the network:

pip install netron

When we use the model optimizer to convert the YOLOv5 model, we need to specify the output node of the IR.

For example, there are 3 output nodes in yolov5s.onnx obtained in the previous step. We can use Netron to visualize yolov5s.onnx. We then find the output node by searching for the keyword "Transpose" in Netron. After that, we can find the convolution node marked as an ellipse, as shown in the image below. After double-clicking on the convolution node, we can see its name "Conv_198".

Similarly, we can find two other output nodes "Conv_232" and "Conv_266".

Run the following command to generate the IR of the YOLOv5 model:

$ python3 /opt/intel/openvino_2021.1.110/deployment_tools/model_optimizer/mo.py  --input_model yolov5s.onnx -s 255 --reverse_input_channels --output Conv_198,Conv_232,Conv_266

The log is:

Model Optimizer arguments:
Common parameters:
        - Path to the Input Model:      /home/Download/yolov5/yolov5s.onnx
        - Path for generated IR:        /home/Download/yolov5/.
        - IR output name:       yolov5s
        - Log level:    ERROR
        - Batch:        Not specified, inherited from the model
        - Input layers:         Not specified, inherited from the model
        - Output layers:        Conv_198,Conv_232,Conv_266
        - Input shapes:         Not specified, inherited from the model
        - Source layout:        Not specified
        - Target layout:        Not specified
        - Layout:       Not specified
        - Mean values:  Not specified
        - Scale values:         Not specified
        - Scale factor:         255.0
        - Precision of IR:      FP32
        - Enable fusing:        True
        - User transformations:         Not specified
        - Reverse input channels:       True
        - Enable IR generation for fixed input shape:   False
        - Use the transformations config file:  None
Advanced parameters:
        - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR:   False
        - Force the usage of new Frontend of Model Optimizer for model conversion into IR:      False
OpenVINO runtime found in:      /opt/intel/openvino/python/python3.8/openvino
OpenVINO runtime version:       2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version:        2022.1.0-7019-cdb9bec7210-releases/2022/1
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /home/Download/yolov5/yolov5s.xml
[ SUCCESS ] BIN file: /home/Download/yolov5/yolov5s.bin
[ SUCCESS ] Total execution time: 0.52 seconds.
[ SUCCESS ] Memory consumed: 127 MB.
It's been a while, check for a new version of Intel(R) Distribution of OpenVINO(TM) toolkit here https://software.intel.com/content/www/us/en/develop/tools/openvino-toolkit/download.html?cid=other&source=prod&campid=ww_2022_bu_IOTG_OpenVINO-2022-1&content=upg_all&medium=organic or on the GitHub*
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai

Three files can now be seen in the current directory.
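Because the IR was cut at the raw Conv outputs, the usual YOLO decode (sigmoid, grid offsets, anchor scaling) must now happen on the host, as the referenced yolov5_demo does. Here is a minimal numpy sketch for one output map (decode_yolo_output is my own helper; the anchors are the standard yolov5s P3/8 set for a 640x640 input):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_output(feat, anchors, stride, num_classes=80):
    """feat: (1, 3*(5+nc), h, w) raw Conv output -> (n, 5+nc) rows in pixels."""
    _, _, h, w = feat.shape
    feat = feat.reshape(1, 3, 5 + num_classes, h, w).transpose(0, 1, 3, 4, 2)
    feat = sigmoid(feat)
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    boxes = feat.copy()
    boxes[..., 0] = (feat[..., 0] * 2.0 - 0.5 + gx) * stride        # center x
    boxes[..., 1] = (feat[..., 1] * 2.0 - 0.5 + gy) * stride        # center y
    for a, (aw, ah) in enumerate(anchors):
        boxes[0, a, ..., 2] = (feat[0, a, ..., 2] * 2.0) ** 2 * aw  # width
        boxes[0, a, ..., 3] = (feat[0, a, ..., 3] * 2.0) ** 2 * ah  # height
    return boxes.reshape(-1, 5 + num_classes)

# P3/8 head of a 640x640 input: stride 8, 80x80 grid, 3 anchors
p3_anchors = [(10, 13), (16, 30), (33, 23)]
raw = np.zeros((1, 255, 80, 80), dtype=np.float32)
out = decode_yolo_output(raw, p3_anchors, stride=8)
print(out.shape)  # (19200, 85)
```

Rows whose objectness score (column 4) clears a threshold would then go through the usual confidence filtering and NMS.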

With that, the conversion succeeded. As for benchmarking and quantizing the model to FP16, you can refer to the official tutorials for those operations. These are just my study notes, which may come in handy later.


Origin blog.csdn.net/submarineas/article/details/125722510