Install Intel OpenVINO environment on Windows 10

Foreword

Reminder: This article is based on the official documentation and my own installation experience. My expertise is limited, so you are welcome to leave a comment and correct me if anything is inaccurate.

This guide applies to Microsoft Windows* 10 64-bit. For the Linux* operating system, see the Linux installation guide for information and instructions.

Tip: You can get started quickly with the Model Optimizer inside the OpenVINO™ Deep Learning Workbench (DL Workbench). DL Workbench is the OpenVINO™ user interface that lets you import models, analyze their performance and accuracy, visualize output, and optimize and prepare models for deployment on various Intel® platforms.

Introduction

Important notes:

  • Unless otherwise specified, all steps in this guide must be followed
  • In addition to downloading the installation package, dependencies must also be installed to complete all configurations

Complete all of the following steps to complete the installation:

  1. Install the Intel® Distribution of OpenVINO™ toolkit core components

  2. Install dependent tools and software

    • Microsoft Visual Studio* 2019 with MSBuild
    • CMake 3.14 or higher 64-bit
    • Python 3.6 - 3.8 64-bit

    Important note: When installing Python, make sure to check Add Python 3.x to PATH so that the installation path is added to the PATH environment variable.

  3. Set environment variables

  4. Configure the Model Optimizer

  5. Optional installation steps:

    • Install the Windows version of the Intel® Graphics driver
    • Install drivers and software for Intel® Vision Accelerator Design with Intel® Movidius™ VPU
    • Update Windows* environment variables (this step is required if Add Python to PATH was not checked during Python installation)

Additionally, this guide also covers the following steps:

  • Code samples and getting started demos
  • Uninstall the OpenVINO™ toolkit

Introduction to the Intel OpenVINO™ Toolkit

The OpenVINO™ toolkit is a comprehensive toolkit for rapid development of applications and solutions that address a variety of tasks, including emulating human vision, automatic speech recognition, natural language processing, recommender systems, and more. Based on the latest generations of artificial neural networks, including convolutional neural networks (CNNs), recurrent networks, and attention-based networks, the toolkit scales computer vision and non-vision workloads across Intel® hardware to maximize performance. It accelerates applications with high-performance AI and deep learning inference from edge hosts to cloud deployments. For more details, see the details page.

OpenVINO distribution features

This guide focuses on the key benefits of the OpenVINO™ toolkit distribution for the Windows* 10 operating system, including:

  • Enables CNN-based deep learning inference at the edge
  • Supports heterogeneous execution across Intel® CPU, Intel® GPU, Intel® Neural Compute Stick 2, and Intel® Vision Accelerator Design with Intel® Movidius™ VPU
  • Accelerates time to market with an easy-to-use library of computer vision functions and pre-optimized kernels
  • Includes optimized calls to computer vision standards, including OpenCV* and OpenCL™

OpenVINO components

This installation includes the following components by default:

Components and their descriptions:

  • Model Optimizer: Imports, converts, and optimizes models trained in popular frameworks into a format usable by Intel tools, especially the Inference Engine. Note: popular frameworks include Caffe*, TensorFlow*, MXNet*, and ONNX*.
  • Inference Engine: The engine that runs deep learning models; it includes a set of libraries that make it easy to integrate inference into applications.
  • OpenCV*: The OpenCV* community version compiled for Intel® hardware.
  • Inference Engine Samples: A set of simple console applications that demonstrate how to use Intel's Deep Learning Inference Engine in your applications.
  • Demos: A set of console applications that demonstrate how to use the Inference Engine in applications to solve specific use cases.
  • Additional Tools: A set of tools for working with models, including an accuracy checker utility, a guide to post-training optimization tools, a model downloader, and more.
  • Documentation for Pre-Trained Models: Documentation for the pretrained models available in the Open Model Zoo repository.

System Requirements

Hardware requirements

  • 6th to 11th Generation Intel® Core™ Processors and Intel® Xeon® Processors
  • 3rd Generation Intel® Xeon® Scalable Processors (formerly codenamed Cooper Lake)
  • Intel® Xeon® Scalable Processors (formerly Skylake and Cascade Lake)
  • Intel Atom® Processor with support for Intel® Streaming SIMD Extensions 4.1 (Intel® SSE4.1)
  • Intel Pentium® Processor N4200/5, N3350/5 or N3450/5 with Intel® HD Graphics
  • Intel® Iris® Xe MAX Graphics
  • Intel® Neural Compute Stick 2
  • Intel® Vision Accelerator Design with Intel® Movidius™ VPU

Note: OpenVINO™ 2020.4 no longer supports the Intel® Movidius™ Neural Compute Stick.

About the processor:

  • Not all processors include processor graphics. For processor information, see the Processor Specifications (detailed link).
  • If you are using an Intel Xeon processor, you need a chipset that supports processor graphics. For chipset information, see the Chipset Specifications (details).

Operating system

Microsoft Windows* 10 64-bit

Software requirements

  • Microsoft Visual Studio* with C++ 2019 or 2017 with MSBuild (download link)
  • CMake 3.10 or higher 64-bit

    Note: If you use Microsoft Visual Studio 2019, you need to install CMake 3.14 (download link).

  • Python 3.6 - 3.8 64-bit (download link)
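Before installing, you can confirm that your interpreter falls inside the supported range. A minimal sketch (the helper name and its tuple argument are my own, not part of OpenVINO):

```python
import sys

def python_version_supported(version=sys.version_info):
    """Return True if (major, minor) is within the 3.6 - 3.8 range
    that OpenVINO 2021.x requires (64-bit builds only)."""
    major, minor = version[0], version[1]
    return (3, 6) <= (major, minor) <= (3, 8)
```

Run `python_version_supported()` in the interpreter you plan to use; note that this sketch does not check the 64-bit requirement, only the version number.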

Installation steps

Please make sure your hardware meets the system requirements above and that software dependencies such as Microsoft Visual Studio and CMake have been installed.

Install the Intel® Distribution of OpenVINO™ toolkit core components

  1. If you have not downloaded the Intel® Distribution of OpenVINO™ toolkit, click here to download it. The default file name of the downloaded installation package is w_openvino_toolkit_p_<version>.exe.

    Recommendation: Select the operating system, release version, software version, and installation method from top to bottom. The recommended options are Windows, 2021.3, Web & Local, Local; that is, select the local installer for version 2021.3 on the Windows platform.

  2. Double-click the installation package. The installer will pop up and let you choose the installation path; the default is C:\Program Files (x86)\Intel\openvino_<version>. To simplify later operations, a shortcut C:\Program Files (x86)\Intel\openvino_2021 pointing to this installation directory is created at the same time; if you choose another path, the shortcut is still created.

    Note: If the OpenVINO™ toolkit was previously installed on your system, this installation will reuse the existing installation path. To install the new version in a different path, uninstall the old version first.

  3. Click Next and choose whether to allow the software to collect and send usage information (either option is fine), then click Next.

  4. If you are missing external dependencies, you will see a warning screen listing them. No further action is required at this point; install the missing dependencies after the Intel® Distribution of OpenVINO™ toolkit core components are installed. The screenshot below shows two missing dependencies (three warnings appear, but the GPU warning in the middle can be ignored; the two missing dependencies are Python and CMake):

  5. Click Next. When the screen below appears, the first part of the installation is complete.

Install dependencies

If, as mentioned earlier, MS Visual Studio and CMake are already installed, skip this step and go to the Configure environment variables section.

  1. Install Microsoft Visual Studio* with C++ 2019 or 2017 with MSBuild (download link)
  2. Install CMake 3.14 (download link)

Configure environment variables

Note: If you installed OpenVINO in a non-default path, replace C:\Program Files (x86)\Intel in the following commands with your actual installation path.

You must update several environment variables before compiling and running OpenVINO applications. Open a command prompt and run the setupvars.bat batch file to temporarily set your environment variables:
C:\Program Files (x86)\Intel\openvino_2021\bin\setupvars.bat
The execution result is as follows:

C:\Users\aoto>C:\"Program Files (x86)"\Intel\openvino_2021\bin\setupvars.bat
Python 3.6.8
[setupvars.bat] OpenVINO environment initialized

IMPORTANT NOTE: Running this configuration script from Windows PowerShell* is not recommended; use the Command Prompt (cmd.exe) instead.

With the environment variables set, the next step is to configure the Model Optimizer.
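To sanity-check that setupvars.bat actually took effect in the current shell, you can look for the variables it exports. A minimal sketch, assuming the 2021.x release exports INTEL_OPENVINO_DIR (the helper name is my own):

```python
import os

def openvino_env_ready(env=None):
    """Return True if the shell looks initialized by setupvars.bat.
    setupvars.bat in OpenVINO 2021.x exports INTEL_OPENVINO_DIR among
    other variables; if it is absent, the script has not been run in
    this session."""
    env = os.environ if env is None else env
    return "INTEL_OPENVINO_DIR" in env
```

Because setupvars.bat only sets variables temporarily, this check must be run from the same command prompt session in which the batch file was executed.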

Configure the Model Optimizer

IMPORTANT NOTE: These steps are required. You must configure the model optimizer for at least one framework. If you do not complete the steps in this section, the Model Optimizer will fail.

Model Optimizer Description

The Model Optimizer is a key component of the Intel® Distribution of OpenVINO™ toolkit. It is not possible to perform inference on a trained model without first running it through the Model Optimizer. When you run a pretrained model through the Model Optimizer, the output is an Intermediate Representation (IR) of the network. The IR is a pair of files describing the entire model:

  • .xml: Description of the network topology
  • .bin: Binary data with weights and biases

The Inference Engine uses a common API to read, load, and run inference on IR files across CPU, GPU, or VPU hardware.

The Model Optimizer is a Python command-line tool (mo.py) located in C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer. Use this tool on models trained with popular deep learning frameworks such as Caffe, TensorFlow, MXNet, and ONNX to convert them into an optimized IR format that the Inference Engine can use.
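The two ideas above can be sketched in a few lines: an IR is a same-stem .xml/.bin pair, and mo.py is invoked with the model as input. A minimal sketch; the helper names and the placeholder paths are my own, while --input_model and --output_dir are standard mo.py flags:

```python
import sys
from pathlib import Path

def ir_pair(xml_path):
    """An IR model is a pair of files sharing one stem:
    <name>.xml (topology) and <name>.bin (weights)."""
    xml = Path(xml_path)
    return xml, xml.with_suffix(".bin")

def build_mo_command(mo_script, input_model, output_dir):
    """Assemble an mo.py invocation as an argument list suitable
    for subprocess.run; paths here are placeholders."""
    return [sys.executable, str(mo_script),
            "--input_model", str(input_model),
            "--output_dir", str(output_dir)]
```

For example, `build_mo_command("mo.py", "model.onnx", "ir_out")` produces a command list you could pass to subprocess.run after the prerequisites below are installed.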

This section explains how to use scripts to configure the Model Optimizer either for all supported frameworks at once or for a single framework. If you would like to configure the Model Optimizer manually instead of using a script, see the Using the Manual Configuration Process section on the Configuring the Model Optimizer page.

For more information on the Model Optimizer, see the Model Optimizer Developer's Guide.

Model Optimizer configuration steps

You can configure the model optimizer for all supported frameworks at once or for one framework at a time. Choose the option that best suits your needs. If you see error messages, make sure you have all dependencies installed.

Important: Internet access is required to successfully perform the following steps. If you can only access the Internet through a proxy server, make sure it is properly configured in your environment.

Please note:

  • If you want to use the Model Optimizer from another installed version of OpenVINO, replace openvino_2021 with openvino_<version>, where <version> is the desired version.
  • If you installed OpenVINO in a non-default directory, replace C:\Program Files (x86)\Intel with the directory where the software is installed.

Please execute the following steps from the command-line interface so that you can see any error messages if something fails:

  • option one
    1. Open the command line (cmd.exe)
    2. Enter the script directory
      cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\install_prerequisites
    3. execute script
      install_prerequisites.bat
  • option two
    • open command line
    • Enter the script directory: cd C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\model_optimizer\install_prerequisites
    • Run the configuration script for each framework you need (you can run several of the different scripts):
      • Caffe framework
        install_prerequisites_caffe.bat
      • TensorFlow 1.x
        install_prerequisites_tf.bat
      • TensorFlow 2.x
        install_prerequisites_tf2.bat
      • MXNet
        install_prerequisites_mxnet.bat
      • ONNX
        install_prerequisites_onnx.bat
      • Kaldi
        install_prerequisites_kaldi.bat

You can choose either of the two installation options above. Option one is recommended: it is convenient and requires fewer script runs.
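The choice between the two options can be summarized as a simple lookup: each framework in option two has its own prerequisite batch file, and everything else falls back to the option-one catch-all script. A minimal sketch (the mapping keys and helper name are my own shorthand; the .bat file names come from the list above):

```python
# Framework shorthand -> per-framework prerequisite script (option two).
PREREQ_SCRIPTS = {
    "caffe": "install_prerequisites_caffe.bat",
    "tf":    "install_prerequisites_tf.bat",
    "tf2":   "install_prerequisites_tf2.bat",
    "mxnet": "install_prerequisites_mxnet.bat",
    "onnx":  "install_prerequisites_onnx.bat",
    "kaldi": "install_prerequisites_kaldi.bat",
}

def prerequisite_script(framework=None):
    """Return the batch file to run: the per-framework script when a
    known framework is named, otherwise the option-one catch-all."""
    return PREREQ_SCRIPTS.get(framework, "install_prerequisites.bat")
```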

OpenVINO workflow and demo trial run

Introduction to OpenVINO Components

The toolkit consists of three main components:

  • Model Optimizer: Optimizes models for Intel architecture and converts them into a format compatible with the Inference Engine. This format is called an Intermediate Representation (IR).
  • Intermediate Representation (IR): The Model Optimizer's output; the model converted into a format optimized for Intel architecture, which the Inference Engine can consume.
  • Inference Engine: A software library that runs inference against an IR (optimized model) to generate inference results.

Additionally, demo scripts, code samples, and demo applications are provided to help you get up and running with the toolkit:

  • Demo Scripts - Batch scripts to run inference pipelines that automate workflow steps to show different scenarios.
  • Code sample - shows how to:
    • Use specific OpenVINO functionality in your application.
    • Perform specific tasks such as loading models, running inference, querying specific device capabilities, etc.
  • Demo Applications - Console applications that provide robust application templates to help you implement specific deep learning scenarios. These applications involve increasingly complex processing pipelines that gather analysis data from multiple models running inference simultaneously, such as detecting a person in a video stream and then detecting the person's physical attributes, such as age, gender, and emotional state.

OpenVINO workflow

The simplified OpenVINO workflow is:

  1. Obtain pre-trained models that can perform inference tasks such as pedestrian detection, face detection, vehicle detection, license plate recognition, and head pose
  2. Run the pretrained model through the Model Optimizer to convert it into an Intermediate Representation (IR): a pair of .xml and .bin files that serve as the input to the inference engine.
  3. Use the Inference Engine API in your application to run inference on the IR (the optimized model) and output the inference results. The application can be an OpenVINO sample or your own application.

Run the demos

The demo scripts built into OpenVINO are located in <INSTALL_DIR>\deployment_tools\demo and can serve as simple examples for understanding the OpenVINO workflow. These scripts automate the workflow and demonstrate inference pipelines for different scenarios. The demos mainly:

  • Compile several sample applications from files included with the OpenVINO components
  • Download the pretrained model
  • Execute the steps and display the results in the console

The example scripts can be run on any device that meets the requirements. The CPU is used for inference by default; other inference devices, such as the GPU, can be specified with the -d parameter. The general usage is as follows:
.\<script_name> -d [CPU, GPU, MYRIAD, HDDL]
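The device selection can be made explicit in a small helper that validates the -d argument before handing the command to a process launcher. A minimal sketch (the function name is my own; the device list comes from the usage line above):

```python
# Devices accepted by the demo scripts, per the usage line above.
VALID_DEVICES = {"CPU", "GPU", "MYRIAD", "HDDL"}

def demo_command(script, device="CPU"):
    """Build the argument list for a demo script, defaulting to CPU
    inference and rejecting devices outside the documented set."""
    if device not in VALID_DEVICES:
        raise ValueError(f"unsupported device: {device}")
    return [script, "-d", device]
```

The resulting list can be passed to subprocess.run from the demo directory once the toolkit is installed.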

An example inference-pipeline script, located at C:\Program Files (x86)\Intel\openvino_2021\deployment_tools\demo\demo_security_barrier_camera.bat, uses vehicle recognition, in which vehicle attributes build on one another to narrow down a specific attribute.

The main content of the script:

  • Download three pretrained IR models
  • Build the Security Camera Demo Application
  • Run the program to demonstrate the inference process using the downloaded model and sample images

App main functions:

  • Identify objects labeled as vehicles
  • Use the vehicle identification as input to a second model that recognizes specific vehicle attributes, including the license plate
  • Use the license plate as input to a third model that recognizes the letters and numbers on the license plate

Execute the script:

# Enter the directory containing the demo scripts
cd C:\"Program Files (x86)"\Intel\openvino_2021\deployment_tools\demo\

# Run the demo script
.\demo_security_barrier_camera.bat

While the script runs, it accesses the network to download the model and other dependencies. When the script finishes, an image recognition window pops up as shown below:


Origin blog.csdn.net/LJX_ahut/article/details/118761535