torch.onnx.export
torch.onnx.export(model, args, f, export_params=True, verbose=False, training=False, input_names=None, output_names=None, aten=False, export_raw_ir=False, operator_export_type=None, opset_version=None, _retain_param_name=True, do_constant_folding=False, example_outputs=None, strip_doc_string=True, dynamic_axes=None, keep_initializers_as_inputs=None)

torch.onnx.export(model,               # the model to export
                torch.randn(1, 3, 224, 224),  # dummy input fixing the input size and dtype; the values can be random
                export_onnx_file,      # output .onnx file name
                verbose=False,         # whether to print the computation graph as a string
                input_names=["input"],   # names of the input nodes (a list)
                output_names=["output"], # names of the output nodes
                opset_version=10,        # ONNX operator set version to target
                do_constant_folding=True,  # fold constant expressions at export time
                # dynamic axes: dimension 0 of "input" is variable and named batch_size,
                # as are dimensions 2 (h) and 3 (w)
                dynamic_axes={"input": {0: "batch_size", 2: "h", 3: "w"}, "output": {0: "batch_size"}}
                )

model (torch.nn.Module) – the model to export.
args (tuple of arguments) – the inputs to the model. Any non-Tensor arguments are hardcoded into the exported model; any Tensor arguments become inputs of the exported model, in the order they appear in args. Because export actually runs the model, we must provide an input tensor; its values can be random as long as the dtype and shape are correct. Note that every input dimension is fixed in the exported ONNX graph unless it is listed in dynamic_axes. In the example above the model is exported with batch size 1, but dimension 0 is declared dynamic, so the exported model accepts inputs of shape [batch_size, 3, h, w], where batch_size (and h, w) can vary.
export_params (bool, default True) – if True (the default), the trained parameter values are exported as well. Set it to False to export an untrained model.
verbose (bool, default False) – if True, prints a debug description of the exported trace.
training (bool, default False) – export the model in training mode. ONNX currently targets inference-only export, so you generally should not set this to True.
input_names (list of strings, default empty list) – names assigned to the graph's input nodes, in order.
output_names (list of strings, default empty list) – names assigned to the graph's output nodes, in order.
dynamic_axes (dict) – marks which axes of which inputs/outputs are dynamic, e.g. {"input": {0: "batch_size"}, "output": {0: "batch_size"}}.
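The dynamic_axes argument is just a nested plain dict, so it can be built and inspected without torch. A minimal sketch of the mapping used in the export call above (the names "input"/"output" match input_names/output_names; the axis labels are arbitrary strings):

```python
# dynamic_axes: {graph input/output name: {axis index: symbolic name}}.
# Any axis not listed stays fixed at the size seen at export time.
dynamic_axes = {
    "input": {0: "batch_size", 2: "h", 3: "w"},  # batch, height, width may vary
    "output": {0: "batch_size"},                 # output batch follows the input
}

# Axis 1 (channels) is deliberately absent, so it stays fixed at 3.
assert 1 not in dynamic_axes["input"]
print(sorted(dynamic_axes["input"]))  # → [0, 2, 3]
```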
 

 

ONNX INT64 weights warning

When TensorRT parses an ONNX model exported by PyTorch, you may see the warning:

Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
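The cast TensorRT attempts is lossless only when every INT64 value actually fits in the INT32 range. A stdlib-only sketch of that range check (the helper name is ours, not a TensorRT API):

```python
# TensorRT casts INT64 weights down to INT32; this is safe only when every
# value fits in the 32-bit signed range.
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def cast_int64_to_int32(values):
    """Return the values as plain ints, raising if any would overflow INT32."""
    out = []
    for v in values:
        if not (INT32_MIN <= v <= INT32_MAX):
            raise OverflowError(f"{v} does not fit in INT32; cast would corrupt weights")
        out.append(int(v))
    return out

print(cast_int64_to_int32([1, 224, 224]))  # → [1, 224, 224]
```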

One common fix is to run the model through onnx-simplifier, whose constant folding often removes the offending INT64 tensors:

pip install onnx-simplifier
python -m onnxsim model_old.onnx model_sim_new.onnx

Origin blog.csdn.net/xihuanniNI/article/details/125467836