Installing OpenVINO and Configuring the Environment on Ubuntu 20.04



I. Installation and Configuration

1. Download the Intel® Distribution of OpenVINO™ toolkit package

Option 1: download it from the official site:

Download Intel® Distribution of OpenVINO™ Toolkit

Select the 2021.4 LTS release; this guide uses l_openvino_toolkit_p_2021.4.752 throughout.

2. Extract the package (the following uses l_openvino_toolkit_p_2021.4.752 as an example):

tar -xvzf l_openvino_toolkit_p_2021.4.752.tgz

3. Enter the l_openvino_toolkit_p_2021.4.752 directory:

cd l_openvino_toolkit_p_2021.4.752

4. Run the graphical (GUI) installation wizard:

sudo ./install_GUI.sh 

(If you already have OpenCV, you can deselect OpenCV when choosing which components to install; otherwise the bundled version may conflict with the one already on your system.)


5. Install the external dependencies:

cd /opt/intel/openvino_2021/install_dependencies
sudo -E ./install_openvino_dependencies.sh

6. Configure the environment:

gedit ~/.bashrc

Append the following line at the end of the file:

source /opt/intel/openvino_2021/bin/setupvars.sh

7. Verify: open a new terminal; if you see [setupvars.sh] OpenVINO environment initialized., the environment is working. Then install the Model Optimizer prerequisites for ONNX:

cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_onnx.sh

At this point the environment is fully configured.

II. Testing

The first test runs a Caffe SqueezeNet model for prediction; it downloads some resources over the network, so make sure you are connected. Note: if the Caffe-related Python packages were not installed in the previous step, install them with pip.
First, enter the demo directory:

cd /opt/intel/openvino_2021/deployment_tools/demo

Then execute the second script, demo_security_barrier_camera.sh:

./demo_security_barrier_camera.sh

After a successful run, a result image is displayed.

III. Model Conversion with the OpenVINO™ Toolkit

With the OpenVINO™ toolkit installed, we use its Model Optimizer to convert the ONNX file into IR (Intermediate Representation) files.

First, set up the OpenVINO™ toolkit environment variables:

source /opt/intel/openvino_2021/bin/setupvars.sh

Then run the following command to convert the ONNX model into IR files (.xml and .bin):

python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model runs/exp5/weights/best.onnx --model_name yolov5s_best -s 255 --reverse_input_channels --output Conv_487,Conv_471,Conv_455

This failed with an error about a missing Python dependency, which was fixed by installing networkx:

pip3 install networkx

OK, the conversion succeeded!

Note: if you want to deploy on a Raspberry Pi, add this parameter:

--data_type FP16
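
For example, the full conversion command with the FP16 flag appended (the same command as above, unchanged except for the extra flag):

python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model runs/exp5/weights/best.onnx --model_name yolov5s_best -s 255 --reverse_input_channels --output Conv_487,Conv_471,Conv_455 --data_type FP16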


For more details on the command-line arguments, see: https://docs.openvinotoolkit.org/cn/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html

After a successful conversion, you get the yolov5s_best.xml and yolov5s_best.bin files.


IV. Inference Deployment with the OpenVINO™ Toolkit

1. Install the Python version of the OpenVINO™ toolkit

Here we use Python for the inference test. Because the toolkit was installed system-wide above, the Python environment does not yet contain the openvino package, so it needs to be installed with pip.

Note: if you installed by building from source or similar, you can skip this step:

pip install openvino

Also, keep the pip package version consistent with the installed toolkit version.
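
For example, a minimal check (a sketch; get_version comes from the Inference Engine Python API) that the pip-installed runtime matches the 2021.4 toolkit installed above:

from openvino.inference_engine import get_version

# Print the Inference Engine build string; for the l_openvino_toolkit_p_2021.4.752
# install above, expect a 2021.4.x build here as well.
print(get_version())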

2. Testing the OpenVINO™ toolkit

The OpenVINO™ toolkit officially provides a YOLOv3 Python inference demo; see:

https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/object_detection_demo/python/object_detection_demo.py

Here we use this already-adapted YOLOv5 version: https://github.com/violet17/yolov5_demo/blob/main/yolov5_demo.py. Its input is a camera stream or a video file, so we can convert the images in the test set into a video (test.mp4) as input, or modify the code ourselves to process images directly; a sketch of the video conversion follows.
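
A minimal sketch of that conversion, assuming the test images sit in a hypothetical test_images/ folder:

import glob

import cv2

images = sorted(glob.glob("test_images/*.jpg"))  # hypothetical folder of test images
first = cv2.imread(images[0])
h, w = first.shape[:2]
writer = cv2.VideoWriter("test.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 25, (w, h))
for path in images:
    frame = cv2.imread(path)
    writer.write(cv2.resize(frame, (w, h)))  # keep a constant frame size
writer.release()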

The main changes in this YOLOv5 version relative to the official YOLOv3 demo are:

  1. A custom letterbox function to preprocess the input image:
def letterbox(img, size=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
    # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232
    shape = img.shape[:2]  # current shape [height, width]
    w, h = size

    # Scale ratio (new / old)
    r = min(h / shape[0], w / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = w - new_unpad[0], h - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (w, h)
        ratio = w / shape[1], h / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border

    # Pad out to exactly (h, w) if rounding left the image short
    top2, bottom2, left2, right2 = 0, 0, 0, 0
    if img.shape[0] != h:
        top2 = (h - img.shape[0]) // 2
        bottom2 = top2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    elif img.shape[1] != w:
        left2 = (w - img.shape[1]) // 2
        right2 = left2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    return img
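
For context, here is a sketch of how a frame would be fed through letterbox before inference (variable names are illustrative; note that -s 255 and --reverse_input_channels were already baked into the IR by the Model Optimizer above, so no manual scaling or BGR-to-RGB swap is needed here):

import numpy as np

# Hypothetical preprocessing of one BGR frame for a 640x640 model input.
image = letterbox(frame, (640, 640))   # pad/resize, keeping aspect ratio
image = image.transpose((2, 0, 1))     # HWC -> CHW
image = np.expand_dims(image, 0)       # add batch dimension -> NCHW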
  2. A custom parse_yolo_region function for the YOLO region layer with sigmoid activation:
def parse_yolo_region(blob, resized_image_shape, original_im_shape, params, threshold):
    # ------------------------------------------ Validating output parameters ------------------------------------------
    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
    predictions = 1.0 / (1.0 + np.exp(-blob))  # sigmoid over the raw output

    assert out_blob_w == out_blob_h, "Invalid size of output blob. It should be in NCHW layout and height should " \
                                     "be equal to width. Current height = {}, current width = {}" \
                                     "".format(out_blob_h, out_blob_w)

    # ------------------------------------------ Extracting layer parameters -------------------------------------------
    orig_im_h, orig_im_w = original_im_shape
    resized_image_h, resized_image_w = resized_image_shape
    objects = list()

    side_square = params.side * params.side

    # ------------------------------------------- Parsing YOLO Region output -------------------------------------------
    bbox_size = int(out_blob_c / params.num)  # 4 + 1 + num_classes

    for row, col, n in np.ndindex(params.side, params.side, params.num):
        bbox = predictions[0, n*bbox_size:(n+1)*bbox_size, row, col]

        x, y, width, height, object_probability = bbox[:5]
        class_probabilities = bbox[5:]
        if object_probability < threshold:
            continue
        x = (2*x - 0.5 + col)*(resized_image_w/out_blob_w)
        y = (2*y - 0.5 + row)*(resized_image_h/out_blob_h)
        # Select the anchor set by the stride of this output head
        # (logical `and`, not bitwise `&`, is needed in these comparisons)
        if int(resized_image_w/out_blob_w) == 8 and int(resized_image_h/out_blob_h) == 8:      # 80x80
            idx = 0
        elif int(resized_image_w/out_blob_w) == 16 and int(resized_image_h/out_blob_h) == 16:  # 40x40
            idx = 1
        elif int(resized_image_w/out_blob_w) == 32 and int(resized_image_h/out_blob_h) == 32:  # 20x20
            idx = 2

        width = (2*width)**2 * params.anchors[idx * 6 + 2 * n]
        height = (2*height)**2 * params.anchors[idx * 6 + 2 * n + 1]
        class_id = np.argmax(class_probabilities)
        confidence = object_probability
        objects.append(scale_bbox(x=x, y=y, height=height, width=width, class_id=class_id, confidence=confidence,
                                  im_h=orig_im_h, im_w=orig_im_w, resized_im_h=resized_image_h, resized_im_w=resized_image_w))
    return objects
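
To make the decoding concrete: with a 640×640 input, the 80×80 head has stride 8 (idx 0), the 40×40 head stride 16 (idx 1), and the 20×20 head stride 32 (idx 2). A toy decode of one cell's center, with made-up offset values for illustration:

# One-cell worked example of the center decode above (stride-8 head).
tx, ty = 0.6, 0.4             # sigmoid-activated offsets from predictions
col, row, stride = 10, 20, 8
x = (2 * tx - 0.5 + col) * stride  # (1.2 - 0.5 + 10) * 8 = 85.6
y = (2 * ty - 0.5 + row) * stride  # (0.8 - 0.5 + 20) * 8 = 162.4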
  3. A custom scale_bbox function for bounding-box post-processing:
def scale_bbox(x, y, height, width, class_id, confidence, im_h, im_w, resized_im_h=640, resized_im_w=640):
    gain = min(resized_im_w / im_w, resized_im_h / im_h)  # gain = old / new
    pad = (resized_im_w - im_w * gain) / 2, (resized_im_h - im_h * gain) / 2  # wh padding
    x = int((x - pad[0]) / gain)
    y = int((y - pad[1]) / gain)

    w = int(width / gain)
    h = int(height / gain)

    xmin = max(0, int(x - w / 2))
    ymin = max(0, int(y - h / 2))
    xmax = min(im_w, int(xmin + w))
    ymax = min(im_h, int(ymin + h))
    # Method item() used here to convert NumPy types to native types for compatibility with functions, which don't
    # support NumPy types (e.g., cv2.rectangle doesn't support int64 in color parameter)
    return dict(xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, class_id=class_id.item(), confidence=confidence.item())
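
A quick worked example of the rescaling arithmetic, assuming a 1920×1080 source frame letterboxed into the 640×640 input:

# gain = min(640/1920, 640/1080) = 1/3: the frame scales to 640x360,
# so all padding is vertical: pad = (0, 140) pixels per the formula above.
gain = min(640 / 1920, 640 / 1080)                        # 0.333...
pad = ((640 - 1920 * gain) / 2, (640 - 1080 * gain) / 2)  # (0.0, 140.0)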

However, in actual testing, this error appears: 'openvino.inference_engine.ie_api.IENetwork' object has no attribute 'layers':

[ INFO ] Creating Inference Engine...
[ INFO ] Loading network files:
        yolov5/yolov5s_best.xml
        yolov5/yolov5s_best.bin
yolov5_demo.py:233: DeprecationWarning: Reading network using constructor is deprecated. Please, use IECore.read_network() method instead
  net = IENetwork(model=model_xml, weights=model_bin)
Traceback (most recent call last):
  File "yolov5_demo.py", line 414, in <module>
    sys.exit(main() or 0)
  File "yolov5_demo.py", line 238, in main
    not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
AttributeError: 'openvino.inference_engine.ie_api.IENetwork' object has no attribute 'layers'

After some investigation, I learned that ie_api.IENetwork.layers was officially removed in OpenVINO™ toolkit 2021.2 and later.

So the contents of lines 327 and 328:

out_blob = out_blob.reshape(net.layers[layer_name].out_data[0].shape) 
layer_params = YoloParams(net.layers[layer_name].params, out_blob.shape[2])

need to be changed to:

out_blob = out_blob.reshape(net.outputs[layer_name].shape)
params = [x._get_attributes() for x in function.get_ordered_ops() if x.get_friendly_name() == layer_name][0]
layer_params = YoloParams(params, out_blob.shape[2])

And a new line of code needs to be added below line 322 (ng here is the ngraph Python API, so make sure the script imports it, e.g. import ngraph as ng):

function = ng.function_from_cnn(net)
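
Relatedly, the DeprecationWarning in the log above can also be addressed by reading the network through IECore instead of the deprecated IENetwork constructor. A minimal loading sketch (model paths as used by the demo):

from openvino.inference_engine import IECore
import ngraph as ng

model_xml = "yolov5/yolov5s_best.xml"
model_bin = "yolov5/yolov5s_best.bin"

ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)  # replaces IENetwork(...)
function = ng.function_from_cnn(net)                       # for the layer params above
exec_net = ie.load_network(network=net, device_name="CPU")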

Finally, run the following command in the terminal:

python yolov5_demo.py -m yolov5/yolov5s_best.xml -i test.mp4

With post-processing included, inference with the OpenVINO™ toolkit averages about 220 ms per frame (test platform: Intel® Core™ i5-7300HQ), versus an average of about 1.25 s with the PyTorch CPU version, so the OpenVINO™ toolkit clearly provides a significant speedup!
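
For reference, such per-frame numbers can be reproduced with a simple timer around the inference call; a sketch using the loading and preprocessing code from the sections above:

import time

input_blob = next(iter(net.input_info))  # name of the model's input
start = time.perf_counter()
outputs = exec_net.infer(inputs={input_blob: image})
print("inference: {:.1f} ms".format((time.perf_counter() - start) * 1000))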

[Figure: final detection results]

If you want fast model inference on a CPU, give the OpenVINO™ toolkit a try!


Reprinted from blog.csdn.net/weixin_42483745/article/details/125314294