OpenVINO Deployment for Object Detection

Installation Instructions

  • Ensure OpenVINO is installed successfully. Any version later than 2017 R3 should work; version 2018 R4 is used here (see the setup sketch after this list).
  • Ensure TensorFlow is installed. This article uses tensorflow-gpu 1.10 compiled from source (compiling the latest version of TensorFlow produced errors).
  • Use Ubuntu 16.04; so far, OpenVINO does not support Ubuntu 18.04.
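After installing, the OpenVINO environment variables must be loaded and the Model Optimizer prerequisites for TensorFlow installed. A minimal sketch, assuming the default 2018 R4 install location under the home directory (adjust the paths to your installation):

# Load the OpenVINO environment variables
source ~/intel/computer_vision_sdk/bin/setupvars.sh
# Install the Model Optimizer prerequisites for TensorFlow models
cd ~/intel/computer_vision_sdk/deployment_tools/model_optimizer/install_prerequisites
./install_prerequisites_tf.sh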

Downloading the SSD v2 object detection training archive

This article uses the SSD model from the download link. The other models were not tested initially, but subsequent tests showed that models from the Google Model Zoo also work (see the sketch in the last section).
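For reference, the SSD Inception v2 COCO archive can be fetched and unpacked as follows. The URL is the standard TensorFlow detection model zoo address and is assumed to match the download link above:

# Download and unpack the SSD Inception v2 COCO (2018-01-28) archive
wget http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz
tar -xzf ssd_inception_v2_coco_2018_01_28.tar.gz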
Generating the intermediate XML file

Note that the official Intel guide's invocation, ./mo_tf.py, appears to be wrong here (granting execute permission to mo_tf.py might make it work, but I did not try). Running python3 mo_tf.py --input_model ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb on its own also fails, and adding --input_shape only yields a different error; at present only the combination of the JSON config and pipeline.config runs without errors.
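If you do want to invoke the script directly as the official guide shows, granting execute permission may be enough; this is untested here, a sketch only:

# Untested: make mo_tf.py executable so the guide's ./mo_tf.py form can run
chmod +x mo_tf.py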

python3 mo_tf.py --input_model ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb --output=detection_boxes,detection_scores,num_detections --tensorflow_use_custom_operations_config /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json --tensorflow_object_detection_api_pipeline_config /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_inception_v2_coco_2018_01_28/pipeline.config

Output is as follows:

Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb
	- Path for generated IR: 	/home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/.
	- IR output name: 	frozen_inference_graph
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	detection_boxes,detection_scores,num_detections
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	/home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/ssd_inception_v2_coco_2018_01_28/pipeline.config
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	/home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer version: 	1.4.292.6ef7232d
/home/amax/anaconda3/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /home/amax/intel/computer_vision_sdk_2018.4.420/deployment_tools/model_optimizer/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 20.20 seconds. 

The following files are generated in the current directory:

  • frozen_inference_graph.xml
  • frozen_inference_graph.bin
  • frozen_inference_graph.mapping

Running inference with the intermediate files

Run the script as follows:

Note: the input image cannot be a JPEG; it must be in BMP format.
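If the test image is a JPEG, convert it to BMP first, for example with ImageMagick (assuming it is installed; example.jpg is a hypothetical file name):

# Convert a JPEG test image to the BMP format the sample expects
convert example.jpg example.bmp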

ssd_bin=/home/amax/inference_engine_samples_build/intel64/Release/object_detection_sample_ssd
network=/home/amax/intel/computer_vision_sdk/deployment_tools/ssd_detect/frozen_inference_graph.xml
${ssd_bin} -i example.bmp -m ${network} -d CPU 

Output:

[ INFO ] InferenceEngine:
        API version ............ 1.4
        Build .................. 17328
Parsing input parameters
[ INFO ] Files were added: 1
[ INFO ]     example.bmp
[ INFO ] Loading plugin

        API version ............ 1.4
        Build .................. lnx_20181004
        Description ....... MKLDNNPlugin
[ INFO ] Loading network files:
        /home/amax/intel/computer_vision_sdk/deployment_tools/ssd_detect/frozen_inference_graph.xml
        /home/amax/intel/computer_vision_sdk/deployment_tools/ssd_detect/frozen_inference_graph.bin
[ INFO ] Preparing input blobs
[ INFO ] Batch size is 1
[ INFO ] Preparing output blobs
[ INFO ] Loading model to the plugin
[ WARNING ] Image is resized from (640, 747) to (300, 300)
[ INFO ] Batch size is 1
[ INFO ] Start inference (1 iterations)
[ INFO ] Processing output blobs
[0,1] element, prob = 0.912171    (28.3366,4.0617)-(640,743.86) batch id : 0 WILL BE PRINTED!
[ INFO ] Image out_0.bmp created!

total inference time: 28.7588
Average running time of one iteration: 28.7588 ms

Throughput: 34.772 FPS

[ INFO ] Execution successful

Original image: [figure]
Output image (out_0.bmp) with the detection drawn: [figure]

Testing with the Model Zoo
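As noted earlier, models from the Google Model Zoo other than SSD Inception v2 also converted and ran in later tests. Below is a sketch of the same workflow for another SSD model; ssd_mobilenet_v2_coco_2018_03_29 is an assumed example, with a URL following the model zoo's naming pattern:

# Hypothetical example: repeat the conversion for another model zoo SSD model
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
tar -xzf ssd_mobilenet_v2_coco_2018_03_29.tar.gz
python3 mo_tf.py --input_model ssd_mobilenet_v2_coco_2018_03_29/frozen_inference_graph.pb \
    --output=detection_boxes,detection_scores,num_detections \
    --tensorflow_use_custom_operations_config extensions/front/tf/ssd_v2_support.json \
    --tensorflow_object_detection_api_pipeline_config ssd_mobilenet_v2_coco_2018_03_29/pipeline.config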

Source: blog.csdn.net/bleedingfight/article/details/86259268