Install tf2onnx and onnxruntime:
pip install onnxruntime
pip install tf2onnx
The steps to convert a TensorFlow model to ONNX are as follows:
- Freeze the TensorFlow dynamic graph and generate a frozen .pb file
- Use tf2onnx to convert the .pb file to an .onnx file
Freeze the TensorFlow dynamic graph using the following code:
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def export_frozen_graph(model, model_dir, name_pb):
    # Wrap the model call in a tf.function and trace it with the model's input spec.
    f = tf.function(lambda x: model(inputs=x))
    f = f.get_concrete_function(
        x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
    # Freeze the graph: replace all variables with constants.
    frozen_func = convert_variables_to_constants_v2(f)
    frozen_func.graph.as_graph_def()
    print("-" * 50)
    print("Frozen model inputs: ")
    print(frozen_func.inputs)
    print("Frozen model outputs: ")
    print(frozen_func.outputs)
    # Serialize the frozen graph to a binary .pb file.
    tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                      logdir=model_dir,
                      name=name_pb,
                      as_text=False)
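Calling the helper end to end might look like the sketch below. The function is repeated here so the snippet is self-contained, and the toy Dense model and file names are assumptions for illustration, standing in for a real network such as YOLO:

```python
import os
import tempfile

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

def export_frozen_graph(model, model_dir, name_pb):
    # Trace the model and freeze its variables into constants.
    f = tf.function(lambda x: model(x))
    f = f.get_concrete_function(
        x=tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
    frozen_func = convert_variables_to_constants_v2(f)
    # Write the frozen graph as a binary .pb file.
    tf.io.write_graph(graph_or_graph_def=frozen_func.graph,
                      logdir=model_dir, name=name_pb, as_text=False)

# Toy model: a single Dense layer (an assumption, not the document's model).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])
out_dir = tempfile.mkdtemp()
export_frozen_graph(model, out_dir, "toy.pb")
print(os.path.exists(os.path.join(out_dir, "toy.pb")))
```

The resulting .pb file in out_dir is what the tf2onnx command in the next step takes as its --input.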
The tf2onnx conversion from .pb to .onnx is run from the terminal. Note that the input layout of most TensorFlow models is NHWC, while the input layout of ONNX models is NCHW, so it is recommended to add the --inputs-as-nchw option when converting. For other options, refer to the tf2onnx documentation, which is very detailed. The command is as follows:
python -m tf2onnx.convert --input yolo.pb --output model.onnx --outputs Identity:0,Identity_1:0,Identity_2:0 --inputs x:0 --inputs-as-nchw x:0 --opset 10
Parameter description:
- --input: the input .pb model
- --output: the output .onnx file name
- --inputs: name(s) of the input layer(s); separate multiple inputs with commas
- --outputs: name(s) of the output layer(s); separate multiple outputs with commas
- --inputs-as-nchw: treat the listed inputs as NCHW; note that the input layer name(s) must be given
- --opset: the ONNX opset version
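The layout difference that --inputs-as-nchw handles can be illustrated with a small numpy sketch (the array shapes here are assumptions for illustration):

```python
import numpy as np

# A batch of 2 RGB images, 4x4 pixels, in TensorFlow's default NHWC layout:
# (batch, height, width, channels)
nhwc = np.zeros((2, 4, 4, 3), dtype=np.float32)

# ONNX models conventionally expect NCHW: (batch, channels, height, width).
# np.transpose reorders the axes to produce the NCHW view of the same data.
nchw = np.transpose(nhwc, (0, 3, 1, 2))

print(nhwc.shape)  # (2, 4, 4, 3)
print(nchw.shape)  # (2, 3, 4, 4)
```

With --inputs-as-nchw, tf2onnx inserts the equivalent transpose into the exported graph, so callers of the ONNX model can feed NCHW input directly.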
The conversion can also be done directly from Python (note that opset is a keyword argument here, not a command-line flag):
import tf2onnx
model_proto, _ = tf2onnx.convert.from_keras(model,
                                            inputs_as_nchw=[model.inputs[0].name],
                                            opset=10,
                                            output_path=model_filepath + 'yolo.onnx')