Accelerate HALCON AI inference using Intel discrete graphics

Authors: Zhang Jiaji, MVTec Pre-Sales Engineer; Zhang Jing, Intel AI Developer Marketing Manager

1.1  What is HALCON

MVTec HALCON is a comprehensive machine vision standard software used worldwide. It comes with an integrated development environment (HDevelop) dedicated to developing image processing solutions. With MVTec HALCON you can:

  • Benefit from a flexible software architecture
  • Accelerate the development of all feasible machine vision applications
  • Guarantee a quick time to market
  • Continuously reduce costs

As a comprehensive toolbox, HALCON covers the entire workflow of machine vision applications. At its core is a flexible and powerful image processing library with more than 2100 operators. HALCON is suitable for all industries and offers excellent performance for image processing. Official website link: https://www.mvtec.com/

Image source: https://www.mvtec.com/cn/products/HALCON/why-HALCON/compatibility

1.2  What is the OpenVINO™ toolkit

The OpenVINO™ toolkit is an open-source toolkit for AI model optimization and deployment that makes it easy to "develop once, deploy anywhere". With it you can:

  • Improve deep learning performance for computer vision, automatic speech recognition, natural language processing, and other common tasks.
  • Use models trained on popular frameworks like TensorFlow, PyTorch, PaddlePaddle, and more.
  • Reduce resource requirements and deploy efficiently across a range of Intel® platforms from edge to cloud.

1.3 Installing HALCON and OpenVINO

Starting with version 21.05, HALCON supports the OpenVINO™ toolkit through the new HALCON AI Accelerator Interface (AI²), so AI models can be accelerated for inference on Intel hardware devices.

The Intel hardware devices currently supported for HALCON AI model inference are shown in the table below.

To accelerate AI inference on Intel hardware devices through the HALCON AI accelerator interface, you only need to install HALCON and the OpenVINO™ toolkit once and then write a HALCON AI inference program.
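In practice, the core of such a program is only a few lines of HDevelop code: query the OpenVINO™ devices exposed through AI², pick one, and assign it to the model. The following minimal sketch uses a model file name of our own choosing as a placeholder; the full workflow is described in detail in Section 1.4.

* Minimal sketch: run an existing deep learning model on an OpenVINO™ device.
read_dl_model ('model_best.hdl', DLModelHandle)
* List all devices offered by the OpenVINO™ plug-in of the AI² interface.
query_available_dl_devices ('ai_accelerator_interface', 'openvino', DLDeviceHandles)
* Assign the first device found to the model; inference now runs on that device.
set_dl_model_param (DLModelHandle, 'device', DLDeviceHandles[0])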

1.3.1  Installing HALCON

Official website registration

Log in to the HALCON software download page on the MVTec official website (the latest version of HALCON at the time of writing is 23.11 Progress): https://www.mvtec.com/downloads/halcon. If you have not registered an MVTec user account, you need to register a personal or corporate account first. (Please note that you must register with a company email address; private email addresses such as QQ mail or 163 mail will be rejected.) You can also check the following web page for the latest version information: HALCON 23.11 New Features: MVTec Software.

Download and unzip

Download the full installation package from the official website (a login account is required): Download HALCON: MVTec Software. You can select the product version and operating system; here we take the latest version, 23.11 Progress for the Windows platform, as an example. Clicking the link in the picture starts the download automatically, and you can use a download tool to speed it up.

After downloading and decompressing, open the corresponding folder and click som.exe to start SOM (the Software Manager).

Installation settings

SOM opens the installation interface in the default browser. If no installable items appear after the interface opens, it is recommended to restart the computer and open som.exe again.

You can click the "Language" button to switch the interface language, and click the "Environment" button to modify settings such as the program and data installation paths and the repository addresses. It is generally best to keep the default values.

Then select the "Available" page, find the installation package, and click the "Install" button. The upper button is to install for the current user, and the lower button is to install for all users (system administrator rights are required). Generally, click the upper button.

If the device has enough disk space (more than 15 GB), it is recommended to select all of the components on the right and install them all; then click Install and wait for the installation to complete.

Notes on the license

Running the HALCON software also requires a corresponding license file. You can purchase an official license from MVTec or apply for a trial license.

Then you can load the license file directly in the SOM interface: click the button highlighted in red in the figure above to open the license management page below, and simply drag the license file in.

Finally, find the icon of HDevelop, HALCON's integrated development environment, on the Windows desktop, and you can start using HALCON normally.

1.3.2  Installing OpenVINO 2021.4 LTS

Follow the download link in the official OpenVINO™ email to download and install OpenVINO™ 2021.4.2, as shown below.

After the installation is complete, add the path of the OpenVINO™ runtime libraries to the Windows PATH environment variable.

Step 1: run

C:\Program Files (x86)\Intel\openvino_2021.4.752\bin\setupvars.bat

to obtain the path of the OpenVINO™ runtime libraries, as shown in the figure below:

Step 2: add the path of the OpenVINO™ runtime libraries to the Windows PATH environment variable, as shown below:

At this point OpenVINO™ has been downloaded and installed, and the path of the OpenVINO™ runtime libraries has been added to the Windows PATH environment variable.

Note: If the CPU in your computer includes integrated graphics, please disable the integrated GPU in the BIOS.

1.4  Writing a HALCON AI inference program

1.4.1 HALCON AI inference program workflow

The workflow of a HALCON AI inference program is shown below, taking HALCON's deep learning object detection as an example. The program code is written in the language of HDevelop, HALCON's integrated development environment; a condensed end-to-end sketch follows the six steps.

1. Read the trained deep learning model and preprocessing parameters:

* Read in the model and the preprocessing parameters.

read_dl_model (RetrainedModelFileName, DLModelHandle)

read_dict (PreprocessParamFileName, [], [], DLPreprocessParam)

2. Import inference images and generate deep learning samples:

* Read the images of the batch.

read_image (ImageBatch, Batch)

* Generate the DLSampleBatch.

gen_dl_samples_from_images (ImageBatch, DLSampleBatch)

3. Preprocess deep learning samples to match the model:

* Preprocess the DLSampleBatch.

preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)

4. Perform deep learning inference:

* Apply the DL model on the DLSampleBatch.

apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)

5. Process the result data:

get_dict_tuple (DLResult, 'bbox_length2', BboxLength2)

get_dict_tuple (DLResult, 'bbox_phi', BboxPhi)

get_dict_tuple (DLResult, 'bbox_class_id', BboxClasses)

6. Display the results:

dev_display (RectangleSelected)

dev_disp_text (TextResults, 'window', 'top', 'left', BboxColorsResults, 'box', 'false')  
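Putting the six steps together, a typical inference loop looks roughly like the following HDevelop sketch. This is a condensed illustration rather than the official example: the file and variable names are placeholders, and count_seconds is added only to show how a per-image runtime such as the one reported in Section 1.4.4 can be measured.

* Step 1: model and preprocessing parameters (file names are placeholders).
read_dl_model ('model_best.hdl', DLModelHandle)
read_dict ('DLPreParam.hdict', [], [], DLPreprocessParam)
* (Optional) assign an OpenVINO™ device here as shown in Section 1.4.4.
* Collect the images to process.
list_image_files ('screws', 'default', [], ImageFiles)
for Index := 0 to |ImageFiles| - 1 by 1
    * Steps 2 and 3: read the image, wrap it as a DL sample, and preprocess it.
    read_image (ImageBatch, ImageFiles[Index])
    gen_dl_samples_from_images (ImageBatch, DLSampleBatch)
    preprocess_dl_samples (DLSampleBatch, DLPreprocessParam)
    * Step 4: inference, timed with count_seconds for illustration.
    count_seconds (T0)
    apply_dl_model (DLModelHandle, DLSampleBatch, [], DLResultBatch)
    count_seconds (T1)
    TimeMs := (T1 - T0) * 1000.0
    * Steps 5 and 6: read results from the result dictionary and display them.
    get_dict_tuple (DLResultBatch[0], 'bbox_class_id', BboxClasses)
    dev_disp_text ('inference: ' + (TimeMs$'.1f') + ' ms', 'window', 'top', 'right', 'black', [], [])
endfor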

1.4.2  HALCON AI accelerator interface (AI²)

MVTec's OpenVINO™ toolkit plug-in is based on the new HALCON AI Accelerator Interface (AI²). Through this common interface, customers can quickly and easily use supported AI accelerator hardware for inference in deep learning applications.

These special devices are not only widely used in embedded environments, but also appear more and more in PC environments. The AI accelerator interface abstracts deep learning models from the specific hardware, which makes it particularly future-proof.

As a technology leader in machine vision software, MVTec's software enables new automation solutions in the industrial IoT environment by using modern technologies such as 3D vision, deep learning and embedded vision.

In addition to the plug-ins provided by MVTec, customer-specific AI accelerator hardware can also be integrated. Furthermore, not only typical deep learning applications can be accelerated by AI², but all “classic” machine vision methods that integrate deep learning capabilities, such as HALCON’s Deep OCR, can also benefit from it.

1.4.3  Using the Deep Learning Tool (DLT) for data annotation and model training

The Deep Learning Tool (DLT) is a free tool from MVTec for deep learning annotation and training. With it you can easily label data through an intuitive user interface without any programming knowledge. The labeled data can be integrated seamlessly into HALCON to perform deep-learning-based object detection, classification, semantic segmentation, instance segmentation, anomaly detection, and Deep OCR.

The following video shows our example of using DLT to annotate and train a model:

Annotate and train instance segmentation models using DLT

1.4.4  HALCON AI inference example program based on OpenVINO

In this article, we use HALCON's official deep learning object detection example program.

The OpenVINO™-based HALCON sample code used in this article has been shared on the MVTec official website at: https://www.mvtec.com/cn/technologies/deep-learning/ai-accelerator-interface

--------------------------------------------------------------------------------------

After downloading, save the program to any path.

If the inference is to load a retrained deep learning model and preprocessing parameters, you need to use HALCON's development environment HDevelop to first run the example program in the official path %HALCONEXAMPLES%/hdevelop/Deep-Learning/Detection/, which completes the training and saves the model:

  • dl_detection_with_orientation_workflow.hdev

After the training and testing parts of that program have finished, the trained model (model_best.hdl) and the image preprocessing parameters (DLPreParam.hdict) are saved in the corresponding path; you can use them to replace the files referenced in the example program.
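For reference, if you train with your own HDevelop script instead of the ready-made example, the two files can be written out with the standard operators write_dl_model and write_dict. The sketch below assumes that DLModelHandle and DLPreprocessParam already exist in your training code; the file names simply match the ones used in this article.

* Save the retrained model for later inference.
write_dl_model (DLModelHandle, 'model_best.hdl')
* Save the preprocessing parameters so that inference preprocesses images identically.
write_dict (DLPreprocessParam, 'DLPreParam.hdict', [], [])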

Open the downloaded example program and point it to the corresponding model and preprocessing parameters in the local path, as shown in the following code:

RetrainedModelFileName:='model_best.hdl'

PreprocessParamFileName:='DLPreParam.hdict'

The demonstration images used in the example come from the 'screws' folder of the HALCON data set. If HALCON is installed correctly, the folder is located in the HALCONEXAMPLES path and can be found directly with the following code:

list_image_files ('screws', 'default', [], ImageFiles)

Then run the example (or press F5). First, the program queries the OpenVINO™ devices supported by HALCON:

* This example needs the HALCON AI²-interface for the Intel® Distribution of the OpenVINO™ Toolkit
* and an installed version of the Intel® Distribution of the OpenVINO™ Toolkit.

query_available_dl_devices ('ai_accelerator_interface', 'openvino', DLDeviceHandlesOpenVINO)

After that, continue executing the program, and the information of all queried OpenVINO™ devices is displayed in sequence in the visualization window, including the device needed in this article: the Intel Arc A770 discrete graphics card. Here we can see that the supported inference precisions are FP32 and FP16, as shown below.
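If you want to inspect the queried devices in code rather than in the visualization window, each device handle can be read with get_dl_device_param. The parameter names 'name' and 'type' used below are assumptions about what the AI² interface exposes; please verify them against the HALCON reference of your version.

* Print the name and type of every OpenVINO™ device that was found.
for Index := 0 to |DLDeviceHandlesOpenVINO| - 1 by 1
    * 'name' and 'type' are assumed parameter names; see the DLDevice documentation.
    get_dl_device_param (DLDeviceHandlesOpenVINO[Index], 'name', DeviceName)
    get_dl_device_param (DLDeviceHandlesOpenVINO[Index], 'type', DeviceType)
    dev_disp_text (Index + ': ' + DeviceType + ' - ' + DeviceName, 'window', 21 * Index, 'left', 'white', 'box', 'true')
endfor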

Next, you need to select an OpenVINO™ device. The OpenVINO™ devices currently supported by the HALCON AI² interface include Intel CPU, GPU, HDDL and MYRIAD. When installing HALCON, only the CPU plug-in is built in; to support the GPU and other devices you need to install the OpenVINO™ toolkit in addition (for the installation, see Section 1.3.2). Here we specify that the device to run on is the "GPU", i.e. the Intel discrete graphics card; to choose another OpenVINO™ device, simply change the device index.

* Choose an OpenVINO™ device.

DLDeviceOpen := DLDeviceHandlesOpenVINO[3]

set_dl_model_param (DLModelHandle, 'device', DLDeviceOpen)

At this point the program performs inference optimization for the selected device and obtains an inference model that has been accelerated and optimized by OpenVINO™. If no additional settings are made, the default precision, float32, is used.

--------------------------------------------------------------------------------------

This example does not use C# or C++ to build a separate user interface; everything is done in HALCON. You need to adjust the window display in HDevelop according to the instructions in the example. After confirming that the adjustment is complete, press F5 again and the example loop runs to completion.
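If you prefer to set up the graphics window in code instead of arranging it by hand, something like the following works in HDevelop; the window size and position are arbitrary example values.

* Close existing graphics windows and open one sized for the demo images.
dev_close_window ()
dev_open_window (0, 0, 800, 600, 'black', WindowHandle)
* Draw region outlines instead of filled regions so overlays stay readable.
dev_set_draw ('margin')
dev_set_line_width (2)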

The obtained display interface and results are as shown in the figure below:

You can see in the picture that the algorithm accurately found the position and orientation of the objects against the background and also marked the corresponding category. In the results display section, you can see the detection results, such as the score, class, detailed coordinates, and angle of each detection. At the same time, the running speed of the algorithm after OpenVINO™ acceleration is shown in the upper right corner of the picture: the algorithm runtime for each picture is about 15~19 ms, which basically meets the needs of high-cycle-rate production.

In addition, in order to enhance the demonstration effect, a wait delay is added after some of the image processing results; this is purely for display purposes.

For the inference workflow, please refer to Section 1.4.1. While inference is running, you can open the Task Manager and observe the load on the Intel discrete graphics card. The example uses FP32 precision by default to accelerate inference; you can also switch to FP16 precision for a comparison test according to your needs, as sketched below.
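A possible way to run the FP16 comparison is sketched below. Note that this is an assumption on our part: the operator optimize_dl_model_for_inference and its argument order are taken from newer HALCON releases (22.05 and later) and should be checked against the HALCON reference before use.

* Assumed conversion to FP16; verify the operator signature in your HALCON version.
create_dict (ConversionParams)
optimize_dl_model_for_inference (DLModelHandle, DLDeviceOpen, 'float16', [], ConversionParams, DLModelHandleFP16)
* Deploy the converted model on the same OpenVINO™ device and compare runtimes.
set_dl_model_param (DLModelHandleFP16, 'device', DLDeviceOpen)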

There is also an obvious acceleration effect on the Intel Arc A380 graphics card.

Accelerate HALCON AI model inference on Intel discrete graphics

1.5  Summary

The MVTec HALCON AI Accelerator Interface (AI²) helps users of MVTec software products take full advantage of AI accelerator hardware compatible with the OpenVINO™ toolkit. As a result, deep learning inference times can be significantly reduced on Intel computing devices for critical workloads.

With the expanded range of supported hardware, users can now take full advantage of the performance of a wide range of Intel devices to accelerate deep learning applications, no longer limited to a few specific devices. At the same time, this integration works seamlessly and is not constrained by specific hardware details: you can now run inference for existing deep learning applications on any device supported by the OpenVINO™ toolkit simply by changing a parameter.


Source: https://blog.csdn.net/gc5r8w07u/article/details/134941811