Introduction
OpenVINO is Intel's toolkit for accelerating neural network inference on its own hardware: 6th-generation and later Core CPUs, Xeon CPUs, Intel graphics, some FPGAs, and VPUs such as the Neural Compute Stick. It can be programmed with both Python and C++. There are two main ways to develop with it:
1. Import the model into OpenVINO directly and run accelerated inference with it.
2. Import the trained model through OpenCV, set the inference backend and target hardware, and get accelerated inference:
net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);
net.setPreferableTarget(DNN_TARGET_CPU);
This article records the environment setup for the second approach, the process of compiling OpenCV together with OpenVINO, and some related notes. I hope it helps.
Downloading and installing the libraries and related software
1. OpenVINO library download and installation
Intel's official website requires you to register an account; after logging in you can download the toolkit. I chose the latest version here. Note down the serial number, as you may need it later.
After the download completes, run the installer directly. It first asks for a folder to extract the files to; any folder will do.
After extraction it goes straight into the installation interface. Since I don't have a VPU, I did not install the VPU module here. Installing to the default path is recommended; click Next.
Next it will report any missing software or hardware. For example, my machine lacks Intel graphics hardware; this doesn't matter, just click Next to install.
Wait for the installation to complete.
OpenCV source code download
Choose an OpenCV version of 4.0 or later to download. I chose 4.1.2 here, without the opencv_contrib library.
CMake and VS installation
I used CMake 3.14 here; for VS, both 2015 and 2017 have been tested and work.
Compilation
Open CMake, select the OpenCV source path and the build output path, click Configure, choose the correct build platform, and click Finish to start the first Configure pass. (Some third-party libraries are downloaded during this process, which can be slow; you can download them manually and place them in the corresponding folders. This part is rather tedious, so I won't go into detail.)
Check BUILD_opencv_world and run Configure a second time.
When Configure finishes, click Generate and wait for generation to complete.
After generation, find OpenCV.sln in the output path, right-click it, and open it with the corresponding version of VS. Find the opencv_world project, right-click Properties, and configure the relevant paths.
Include directories:
Library directories:
Additional dependencies: inference_engine.lib (Release mode) and inference_engined.lib (Debug mode).
Under C/C++ -> Preprocessor -> Preprocessor Definitions, add: HAVE_INF_ENGINE.
Find the source file op_inf_engine.cpp and add:
#ifndef HAVE_INF_ENGINE
#define HAVE_INF_ENGINE
#endif // !HAVE_INF_ENGINE
Find the header file op_inf_engine.hpp and comment out the warning.
Right-click ALL_BUILD and build it; this takes a while.
After the build finishes, go to INSTALL -> Project Only -> Build Only INSTALL.
At this point the compilation is complete.
Testing
Set environment variables.
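The inference engine DLLs must be on PATH at runtime. A sketch of the relevant settings as a batch fragment, assuming OpenVINO's default install location and an example path for the freshly built opencv_world DLL; adjust both paths to your machine:

```shell
:: Assumption: default OpenVINO install path; adjust if you installed elsewhere.
:: setupvars.bat adds the Inference Engine DLL directories to PATH.
call "C:\Program Files (x86)\IntelSWTools\openvino\bin\setupvars.bat"

:: Also make the freshly built opencv_world DLL visible
:: (D:\opencv-build is an example path, not from the article).
set PATH=%PATH%;D:\opencv-build\install\x64\vc15\bin
```

Alternatively, add the same directories permanently via the system environment variable dialog.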
In the project properties, set the include directories, library directories, and additional dependencies.
Then start the tests. First without inference engine acceleration: the inference time is about 20-23 ms, with slight fluctuations.
With the inference engine, the inference time is about 8-9 ms.
Incidentally, here are the inference times with the CUDA acceleration module: about 2-3 ms. The configuration for that is covered in another blog post of mine.