Conversion between ONNX and TensorFlow formats, OpenCV directly calling the pb file for prediction, and converting PyTorch to ONNX

Introduction

ONNX is an AI model interchange format created by Facebook, but TensorFlow does not officially support ONNX, so you can only try to convert from TensorFlow using the tools provided by the ONNX project itself.

1. Tensorflow model to onnx

Convert TensorFlow to ONNX. The official ONNX GitHub provides a conversion tutorial at https://github.com/onnx/tutorials/blob/master/tutorials/OnnxTensorflowExport.ipynb. Following the steps in the link, I completed the MNIST model conversion step by step and successfully produced the mnist.onnx model. However, within those same steps, running tf_rep = prepare(model) after model = onnx.load('mnist.onnx') kept failing, while executing tf_rep = prepare(model) on a mnist.onnx downloaded from the Internet that was exported from PyTorch works without problems. I have not found the reason for this yet.
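
For reference, an alternative route for this direction is the tf2onnx converter. The sketch below is only an illustration and assumes a newer tf2onnx release (1.9 or later, which exposes tf2onnx.convert.from_graph_def); the frozen-graph file name and the "input:0" / "output:0" tensor names are placeholders, not values taken from the tutorial above:

import tensorflow as tf
import tf2onnx

# Load a frozen GraphDef (placeholder file name)
graph_def = tf.compat.v1.GraphDef()
with open("frozen_mnist.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Convert to ONNX; the tensor names below are placeholders for your own graph
model_proto, _ = tf2onnx.convert.from_graph_def(
    graph_def,
    input_names=["input:0"],
    output_names=["output:0"],
    opset=10,
    output_path="mnist.onnx",
)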

Converting the ONNX model to a TensorFlow model. As mentioned above, tf_rep = prepare(model) fails on the ONNX model generated from the TensorFlow conversion in the official tutorial, so here I downloaded an MNIST ONNX model converted from PyTorch as the experimental subject. The ONNX model used for the experiment can be downloaded from: https://download.csdn.net/download/computerme/10448754
The code for converting the onnx model to a Tensorflow model is as follows:

import onnx
import numpy as np
from onnx_tf.backend import prepare

# Load the ONNX model and wrap it in an onnx-tf backend representation
model = onnx.load('./assets/mnist_model.onnx')
tf_rep = prepare(model)

# Run the model directly through the backend on a sample image
img = np.load("./assets/image.npz")
output = tf_rep.run(img.reshape([1, 1, 28, 28]))

print("output mat: \n", output)
print("The digit is classified as ", np.argmax(output))

import tensorflow as tf
with tf.Session() as persisted_sess:
    print("load graph")
    persisted_sess.graph.as_default()
    # Import the TensorFlow graph held by the onnx-tf representation
    tf.import_graph_def(tf_rep.predict_net.graph.as_graph_def(), name='')
    # Look up the input and output tensors by name in the imported graph
    inp = persisted_sess.graph.get_tensor_by_name(
        tf_rep.predict_net.tensor_dict[tf_rep.predict_net.external_input[0]].name
    )
    out = persisted_sess.graph.get_tensor_by_name(
        tf_rep.predict_net.tensor_dict[tf_rep.predict_net.external_output[0]].name
    )
    res = persisted_sess.run(out, {inp: img.reshape([1, 1, 28, 28])})
    print(res)
    print("The digit is classified as ", np.argmax(res))

# Export the converted graph as a TensorFlow pb file
tf_rep.export_graph('tf.pb')

After the conversion is completed, the converted tf.pb model needs to be verified. The verification method is as follows:

import numpy as np
import tensorflow as tf
from tensorflow.python.platform import gfile

name = "tf.pb"

with tf.Session() as persisted_sess:
    print("load graph")
    with gfile.FastGFile(name, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    persisted_sess.graph.as_default()
    tf.import_graph_def(graph_def, name='')

    # Tensor names are specific to this converted model: '0:0' is the input,
    # 'LogSoftmax:0' is the final output node
    inp = persisted_sess.graph.get_tensor_by_name('0:0')
    out = persisted_sess.graph.get_tensor_by_name('LogSoftmax:0')
    #test = np.random.rand(1, 1, 28, 28).astype(np.float32)
    #feed_dict = {inp: test}

    img = np.load("./assets/image.npz")
    feed_dict = {inp: img.reshape([1, 1,28,28])}

    classification = persisted_sess.run(out, feed_dict)
    print(out)
    print(classification)
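
If you are not sure which tensor names to use for a different pb file, you can list the nodes of the loaded graph and pick the input/output names from there, e.g.:

# List the node names/ops in the loaded GraphDef to find the input and output tensors
for node in graph_def.node:
    print(node.name, node.op)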

Reference address:
pytorch-onnx-tensorflow

2. OpenCV directly calls TensorFlow's pb file for prediction

There are also tutorials on the Internet that first generate a pbtxt file for the pb file and then run the forward pass by loading both files; that approach is mainly for detection networks. The usual TensorFlow workflow is to export the trained weights as a frozen inference graph, as sketched below.
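
A minimal sketch of that export step with the TensorFlow 1.x API is shown below; the checkpoint paths and the "output" node name are placeholders for whatever your own graph uses (the pb/pbtxt file names match the C++ snippet further down only for illustration):

import tensorflow as tf
from tensorflow.python.framework import graph_util

with tf.Session() as sess:
    # Restore the trained weights (placeholder checkpoint paths)
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    saver.restore(sess, 'model.ckpt')

    # Freeze the variables into constants, keeping only the nodes needed by the
    # output node ("output" is a placeholder for your graph's real output name)
    frozen_def = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ['output'])

    # Write the frozen inference graph, plus an optional pbtxt for readability
    tf.train.write_graph(frozen_def, '.', 'final_model.pb', as_text=False)
    tf.train.write_graph(frozen_def, '.', 'final_model.pbtxt', as_text=True)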

The C++ code (OpenCV DNN) that uses the frozen inference graph is as follows:

Net net2 = readNetFromTensorflow("final_model.pb");   // load the model

net2.setPreferableBackend(DNN_BACKEND_CUDA);
net2.setPreferableTarget(DNN_TARGET_CUDA);            // set the inference backend/target

Mat image = imread("color.png");

vector<Mat> images(1, image);
Mat inputBlob2 = blobFromImages(images, 1 / 255.F, Size(640, 640), Scalar(), true, false);

net2.setInput(inputBlob2);           // set the input data
Mat score;
net2.forward(score);                 // forward pass
Mat segm;
colorizeSegmentation(score, segm);   // visualize the result
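
For reference, roughly the same pipeline in Python with cv2.dnn; the file names and the 640x640 input size are simply the placeholders reused from the C++ snippet above, and colorizeSegmentation is replaced here by a plain argmax over the class scores:

import cv2
import numpy as np

net = cv2.dnn.readNetFromTensorflow("final_model.pb")    # load the model
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)        # requires OpenCV built with CUDA
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

image = cv2.imread("color.png")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (640, 640), (0, 0, 0),
                             swapRB=True, crop=False)

net.setInput(blob)                                         # set the input data
score = net.forward()                                      # e.g. (1, num_classes, H, W)
segm = np.argmax(score[0], axis=0).astype(np.uint8)        # per-pixel class index map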

3. Convert PyTorch to ONNX

The export code is as follows (the EfficientNet class is the author's own model definition, whose import is not shown here):
import torch

if __name__ == "__main__":

    outputonnx_name = "temp/pytorch_efficientnet_cls.onnx"
    """
    Export the ONNX model with PyTorch's built-in onnx module
    """
    print("Efficient B0 Summary")
    model = EfficientNet(1, 1)      # the author's own EfficientNet definition
    model.eval()
    x = torch.randn(1, 3, 224, 224, requires_grad=True)
    out_value = model(x)
    # Export with the public torch.onnx.export API
    torch.onnx.export(model, x, outputonnx_name, export_params=True, opset_version=10)
    """
    需要使用pip安装onnx,使用其来进行检测网络
    """
    import onnx
    # Load the ONNX model
    model = onnx.load(outputonnx_name)

    # Check that the IR is well formed
    onnx.checker.check_model(model)
    # Print a human readable representation of the graph
    res = onnx.helper.printable_graph(model.graph)
    print(res)
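
Optionally, the exported model can also be run with onnxruntime (installed separately via pip) and its output compared with the PyTorch output. A minimal sketch, reusing outputonnx_name, x and out_value from the script above:

import numpy as np
import onnxruntime as ort

# Run the exported ONNX model on the same input used for the export trace
sess = ort.InferenceSession(outputonnx_name)
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: x.detach().numpy()})[0]

# Compare against the PyTorch forward pass
print("max abs diff vs. pytorch:", np.abs(onnx_out - out_value.detach().numpy()).max())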

If EfficientNet reports an error related to squeeze(-1) during the ONNX conversion, see the solution in the corresponding GitHub issue.
Note:
OpenCV will throw an error when loading an ONNX model that contains an SE (squeeze-and-excitation) module, such as EfficientNet. The reason is that the attention block has multiple branches and merges them with a multiplication; if the merge were an addition there would be no problem. There is no solution for this at the moment.

Origin blog.csdn.net/yangdashi888/article/details/104198844