Accelerating CPU inference with the OpenCV DNN module and OpenVINO

Introduction

OpenVINO is Intel's toolkit for accelerating neural-network inference on its own hardware (6th-generation and later Core CPUs, some Xeon CPUs, integrated graphics, and certain FPGA and VPU devices such as the Neural Compute Stick; see Intel's documentation for details). It can be programmed in Python and C++. There are two main ways to develop with it:
1. Import the model into OpenVINO directly and use it for accelerated inference.
2. Import the trained model with OpenCV, set the inference backend and target hardware, and get accelerated inference, as shown below:

net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);
net.setPreferableTarget(DNN_TARGET_CPU);
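
A minimal sketch of the second approach (the Caffe file names, image path, and input size here are placeholders; substitute any model the DNN module supports):

#include <opencv2/opencv.hpp>

int main()
{
    // Load a trained model; placeholder Caffe files, use your own network.
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "model.caffemodel");

    // Route inference through the OpenVINO Inference Engine on the CPU.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    // Prepare the input and run a forward pass as usual.
    cv::Mat blob = cv::dnn::blobFromImage(cv::imread("test.jpg"), 1.0, cv::Size(300, 300));
    net.setInput(blob);
    cv::Mat out = net.forward();
    return 0;
}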

This article records the environment setup for the second approach: the process of compiling OpenCV together with OpenVINO, plus related notes. I hope it helps.

Library and related software download and installation instructions

1. OpenVINO download and installation

Intel's official site requires account registration; you can download after logging in. I chose the latest version here. Note down the serial number, as it may be needed later.
Figure 1. Select the version to download
Run the installer after the download completes. It first extracts the files to a folder; any folder will do.
After extraction, the installation interface opens directly. Since I don't have a VPU, the VPU module is not installed here; it is recommended to install to the default path. Click Next.
Next, the installer lists any missing software or hardware. For example, my machine lacks Intel graphics hardware; this doesn't matter, just click Next to install.
Wait for the installation to complete.

2. OpenCV source download

Choose an OpenCV version of 4.0 or later to download; I chose 4.1.2 here, without the opencv_contrib library.

3. CMake and VS installation

For CMake I used 3.14 here; both VS 2015 and VS 2017 have been tested and work.

Compilation

Open CMake, select the OpenCV source path and the build output path, and click Configure. Choose the correct compiler platform, click Finish, and the first Configure run starts. (Some third-party libraries are downloaded during this process and the download can be slow; they can also be downloaded manually and placed in the corresponding folders. That part is fairly tedious, so I won't go into detail.)
Check BUILD_opencv_world and run Configure a second time; the equivalent command line is sketched below.
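For reference, the GUI steps correspond roughly to a command line like this, run from an empty build directory (the generator and source path are examples for my setup; note that OpenCV also offers a WITH_INF_ENGINE CMake option, while this post instead wires the Inference Engine up manually in Visual Studio below):

cmake -G "Visual Studio 15 2017 Win64" -D BUILD_opencv_world=ON C:\path\to\opencv-4.1.2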
After Configure completes, click Generate and wait for generation to finish.
Once generation is done, find OpenCV.sln in the build directory and open it with the matching version of Visual Studio. Find the opencv_world project, right-click Properties, and configure the relevant paths.
Include directories: add the Inference Engine include path.
Library directories: add the Inference Engine library path.
Additional dependencies: add inference_engine.lib (Release mode) and inference_engined.lib (Debug mode).
Under C/C++ -> Preprocessor -> Preprocessor Definitions, add: HAVE_INF_ENGINE.
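For reference, with a default OpenVINO installation these paths typically look like the following; the exact layout varies between OpenVINO versions, so verify them against your own install:

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\include
C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\lib\intel64\Release
C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\lib\intel64\Debug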
Find the source file op_inf_engine.cpp and add:

// Force-enable the Inference Engine code path in this build.
#ifndef HAVE_INF_ENGINE
#define HAVE_INF_ENGINE
#endif // !HAVE_INF_ENGINE

Find the header file op_inf_engine.hpp and comment out the warning.
Right-click ALL_BUILD, build it, and wait a while.
After the build completes, right-click the INSTALL project and choose Project Only -> Build Only INSTALL.
At this point the build is done.

Test

Set the environment variables so that the Inference Engine DLLs can be found at runtime; typical paths are listed below.
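With a default installation, the directories to add to PATH look roughly like this (again version-dependent, so check your install; the TBB entry is needed when the Inference Engine was built against TBB):

C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\bin\intel64\Release
C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\bin\intel64\Debug
C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\external\tbb\bin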
In the test project, set the include directories, library directories, and additional dependencies to point at the freshly built OpenCV, as usual.
Then start the test. First without Inference Engine acceleration: inference time is about 20-23 ms, with slight fluctuation.
With the Inference Engine backend, inference time is about 8-9 ms.
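
For reference, timings like the ones above can be measured with a simple loop around net.forward(); a sketch using cv::TickMeter, with the same placeholder model files as before:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "model.caffemodel");
    // Comment out the next line to time the default OpenCV backend instead.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_INFERENCE_ENGINE);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);

    cv::Mat blob = cv::dnn::blobFromImage(cv::imread("test.jpg"), 1.0, cv::Size(300, 300));

    // Average over many runs; the first forward pass can be slower
    // because of one-time backend initialization.
    cv::TickMeter tm;
    for (int i = 0; i < 100; ++i)
    {
        net.setInput(blob);
        tm.start();
        net.forward();
        tm.stop();
    }
    std::cout << tm.getTimeMilli() / tm.getCounter() << " ms per inference" << std::endl;
    return 0;
}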
Incidentally, here is the inference time with the CUDA acceleration module: about 2-3 ms. The configuration can be found in this blog post.
