Fun with OpenVINO 2: Trial run mask_rcnn_demo

Debug open_model_zoo/mask_rcnn_demo

Following the previous article, now that the demos are compiled, let's continue playing with OpenVINO and try out a model from open_model_zoo.

Suppose you want to debug a particular model from open_model_zoo. (In the commands below, substitute the model you actually want to use; I usually convert several at once and then test them at random.)

First, run the Model Optimizer to convert the model and produce the IR files.

The command (run from the model_optimizer directory, since the JSON path below is relative) is as follows:

python mo_tf.py ^
    --input_model E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/frozen_inference_graph.pb ^
    --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json ^
    --tensorflow_object_detection_api_pipeline_config E:/mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28/pipeline.config

Friends who like to debug with VS Code can use the launch.json below. Since "program" is "${file}", open mo_tf.py as the active file before starting the debugger:

{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: 当前文件",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": false,
            "args": [
                "--input_model","E:\\mask_rcnn_resnet50_atrous_coco_2018_01_28\\frozen_inference_graph.pb", 
                "--tensorflow_use_custom_operations_config","D:/devOpenVino/openvino_2020.3.194/deployment_tools/model_optimizer/extensions/front/tf/mask_rcnn_support.json",
                "--tensorflow_object_detection_api_pipeline_config","E:/mask_rcnn_inception_resnet_v2_atrous_coco_2018_01_28/pipeline.config"
            ]
        }
    ]
}

If, like me, you did not specify an output name, the conversion simply produces frozen_inference_graph.bin and frozen_inference_graph.xml. Remember to rename them after the corresponding model; otherwise you will quickly get confused once there are several.
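You can also name the output at conversion time. Here is a minimal sketch using the Model Optimizer's --model_name and --output_dir options (the output directory is my hypothetical choice, and I assume pipeline.config sits next to the frozen graph, as in the standard TF Object Detection API download):

python mo_tf.py ^
    --input_model E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/frozen_inference_graph.pb ^
    --tensorflow_use_custom_operations_config extensions/front/tf/mask_rcnn_support.json ^
    --tensorflow_object_detection_api_pipeline_config E:/mask_rcnn_resnet50_atrous_coco_2018_01_28/pipeline.config ^
    --model_name mask_rcnn_resnet50_atrous_coco ^
    --output_dir E:/ir_models

This yields mask_rcnn_resnet50_atrous_coco.xml and .bin directly, with no renaming needed.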

Below, we start testing these models with the C++ demos in VS2019. The demos were compiled in the previous article, "Playing with the compilation of OpenVINO_cpp samples", so now we can use them.

Add path one

If you use the Debug build, add the following to the PATH environment variable:

C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Debug

At the same time, copy opencv_world430d.dll into this folder (or, if you would rather not copy, add its directory to PATH yourself; either way, the program has to be able to find the DLL).

If it is the Release build, add

C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Release

At the same time, copy opencv_world430.dll into this folder.

In general, the Intel inference_engine depends on quite a few DLLs.
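If the demo fails to start with a missing-DLL error, one way to see exactly which DLLs a binary links against is dumpbin, run from a VS developer command prompt (shown here against our demo's executable):

dumpbin /DEPENDENTS mask_rcnn_demo.exe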

Add path two

There are a few more paths that must be added as well:

C:\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\external\tbb\bin

C:\IntelSWTools\openvino_2020.3.194\deployment_tools\ngraph\lib
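Putting the whole PATH setup together, here is a minimal sketch for a single Command Prompt session (it assumes the default install root used above; swap Debug for Release as needed, and note that the OpenCV location under the install root is my assumption):

set "OV_ROOT=C:\IntelSWTools\openvino_2020.3.194"
rem Inference Engine runtime (Debug build)
set "PATH=%OV_ROOT%\deployment_tools\inference_engine\bin\intel64\Debug;%PATH%"
rem TBB and nGraph dependencies
set "PATH=%OV_ROOT%\deployment_tools\inference_engine\external\tbb\bin;%PATH%"
set "PATH=%OV_ROOT%\deployment_tools\ngraph\lib;%PATH%"
rem OpenCV DLLs (assumed location; an alternative to copying opencv_world430d.dll)
set "PATH=%OV_ROOT%\opencv\bin;%PATH%"

These set commands only affect the current console session, which is handy when switching between builds.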

Trial run

The project we are going to run is mask_rcnn_demo.

For details, please refer to: https://docs.openvinotoolkit.org/latest/_demos_mask_rcnn_demo_README.html

I excerpt part of the description below. (Note: the excerpt uses the Linux command format; the commands I actually use later follow the Windows format, which differs slightly.)

./mask_rcnn_demo -h
InferenceEngine:
    API version ............ <version>
    Build .................. <number>
mask_rcnn_demo [OPTION]
Options:
    -h                                Print a usage message.
    -i "<path>"                       Required. Path to a .bmp image.
    -m "<path>"                       Required. Path to an .xml file with a trained model.
      -l "<absolute_path>"            Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
          Or
      -c "<absolute_path>"            Required for GPU custom kernels. Absolute path to the .xml file with the kernels descriptions.
    -d "<device>"                     Optional. Specify the target device to infer on (the list of available devices is shown below). Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo will look for a suitable plugin for a specified device (CPU by default)
    -detection_output_name "<string>" Optional. The name of detection output layer. Default value is "reshape_do_2d"
    -masks_name "<string>"            Optional. The name of masks layer. Default value is "masks"

To view the help text yourself: mask_rcnn_demo --h

C:\IntelSWTools\openvino_2020.3.194\deployment_tools\open_model_zoo\demos\dev\intel64\Debug>mask_rcnn_demo --h
InferenceEngine: 00007FFCC7C49BC8

mask_rcnn_demo [OPTION]
Options:

    -h                                Print a usage message.
    -i "<path>"                       Required. Path to a .bmp image.
    -m "<path>"                       Required. Path to an .xml file with a trained model.
      -l "<absolute_path>"            Required for CPU custom layers. Absolute path to a shared library with the kernels implementations.
          Or
      -c "<absolute_path>"            Required for GPU custom kernels. Absolute path to the .xml file with the kernels descriptions.
    -d "<device>"                     Optional. Specify the target device to infer on (the list of available devices is shown below). Use "-d HETERO:<comma-separated_devices_list>" format to specify HETERO plugin. The demo will look for a suitable plugin for a specified device (CPU by default)
    -detection_output_name "<string>" Optional. The name of detection output layer. Default value is "reshape_do_2d"
    -masks_name "<string>"            Optional. The name of masks layer. Default value is "masks"

Available target devices:  CPU  GNA

Note that the input picture must be in BMP format.
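If your test picture is a JPEG or PNG, OpenCV, which is already part of this setup, can convert it. A minimal sketch (the file names are hypothetical):

#include <opencv2/opencv.hpp>

int main() {
    // Load any format OpenCV understands (JPEG, PNG, ...).
    cv::Mat img = cv::imread("J:/BigData/default.jpg");
    if (img.empty()) return 1;  // load failed
    // The .bmp extension selects the BMP encoder.
    cv::imwrite("J:/BigData/default.bmp", img);
    return 0;
}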

How do you pass the picture's path? The official command is as follows:

./mask_rcnn_demo -i <path_to_image>/inputImage.bmp -m <path_to_model>/mask_rcnn_inception_resnet_v2_atrous_coco.xml

In fact, OpenVINO handles this command-line input in a file called args_helper.hpp; the relevant section reads as follows:

/**
* @brief This function finds the -i/--images key in the input args
*        It's necessary to process multiple values for a single key
* @return files updated vector of verified input files
*/
inline void parseInputFilesArguments(std::vector<std::string> &files) {
    std::vector<std::string> args = gflags::GetArgvs();
    bool readArguments = false;
    for (size_t i = 0; i < args.size(); i++) {
        if (args.at(i) == "-i" || args.at(i) == "--images") {
            readArguments = true;  // start collecting values after the key
            continue;
        }
        if (!readArguments) {
            continue;
        }
        if (args.at(i).c_str()[0] == '-') {
            break;  // reached the next option, stop collecting
        }
        readInputFilesArguments(files, args.at(i));  // validate the path and append it
    }
}

In other words, the input picture can be given in either of the following forms:

-i xyz.bmp or --images <path> (I did not study this carefully; the path appears to be a folder or a file such as xyz.bmp)
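Note also that because the loop above keeps consuming arguments until it hits the next token starting with '-', it looks like you can pass several inputs after a single -i (file names here are hypothetical):

mask_rcnn_demo -i img1.bmp img2.bmp -m frozen_inference_graph.xml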

If you are debugging and running in VS2019, you can put exactly this format into the project's debugging arguments, for example:

-i J:\BigData\default.bmp -m E:\mask_rcnn_resnet50_atrous_coco_2018_01_28\frozen_inference_graph.xml

I tried Debug mode on a pure CPU, and it was painfully slow! In Release mode, a randomly chosen picture took a few seconds or so; I can't say I felt any speed-up.

Of course, there is still plenty to dig into; those details are set aside for now. Let's just get it running first.
