onnx onnxruntime onnxruntime-gpu

1. Introduction to onnx
Many different frameworks can be used to train deep learning models, such as PyTorch, TensorFlow, MXNet, and Caffe. Each framework saves trained models in its own format, so deploying a model for inference requires different dependency libraries, and even different versions of the same framework (e.g., TensorFlow) can differ significantly.

To resolve this fragmentation, the LF AI Foundation teamed up with Facebook, Microsoft, and other companies to define a standard format for machine learning models, called ONNX (Open Neural Network Exchange). Model files produced by other frameworks (.pth, .pb, etc.) can be converted to this standard format, after which they can be deployed uniformly with tools such as ONNX Runtime. (Much like the intermediate bytecode generated by Java runs on the JVM, the ONNX Runtime engine provides inference for the generated .onnx model files.)

onnx homepage

onnxruntime.ai

References:

ONNX standard & ONNX Runtime accelerated inference engine — Wang Xiaoxi ww's blog, CSDN

ONNX, ONNX Runtime, and TensorRT — Auriga IT


Origin blog.csdn.net/linzhiji/article/details/132298582