Python's keras2onnx: a detailed guide to introducing, installing, and using the keras2onnx library
Table of contents
Introduction to the keras2onnx library
Installation of keras2onnx library
How to use the keras2onnx library
Introduction to the keras2onnx library
The keras2onnx model converter enables users to convert Keras models to the ONNX model format. Originally, the Keras converter was developed in the onnxmltools project. Development of the keras2onnx converter was moved to a separate repository to support a wider variety of Keras models and to reduce the complexity of mixing multiple converters.
Most common Keras layers are supported for conversion. See the Keras documentation or the tf.keras documentation for details on individual layers.
Windows Machine Learning (WinML) users can use WinMLTools, which wraps keras2onnx to convert Keras models. If you use the keras2onnx converter directly, please refer to the WinML release notes to determine the appropriate ONNX opset number for your version of WinML.
keras2onnx has been tested with Python 3.5-3.8 and TensorFlow 1.x/2.0-2.2 (CI builds). It does not support Python 2.x.
The project has since stopped active development of keras2onnx and frozen it at tf-2.3 and onnx-1.10. To convert your Keras models, you can turn to tf2onnx, which can convert TensorFlow, Keras, TFLite, and TensorFlow.js models. All keras2onnx unit tests have been added to the tf2onnx CI pipeline to ensure there are no avoidable regressions. The tf2onnx API tf2onnx.convert.from_keras() is similar to the keras2onnx API, so the transition should be painless.
GitHub link : GitHub - onnx/keras-onnx: Convert tf.keras/Keras models to ONNX
Installation of keras2onnx library
```shell
# Install from PyPI
pip install keras2onnx

# Or install via the Tsinghua mirror (faster from mainland China)
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple keras2onnx
```
How to use the keras2onnx library
1. Basic Usage
```python
import onnx
import keras2onnx

# Convert the Keras model (an already-built/trained `model`) to ONNX
onnx_model = keras2onnx.convert_keras(model, model.name)

# Save the ONNX model to disk
onnx.save_model(onnx_model, 'model.onnx')
```
```python
import onnxruntime as rt
import numpy as np

# Load the ONNX model
sess = rt.InferenceSession('model.onnx')

# Run predictions on X_val
X_val_np = np.array(X_val)
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name
y_prob = sess.run([output_name], {input_name: X_val_np.astype(np.float32)})[0]

# Compare predictions before and after exporting the model;
# ravel() flattens an (n, 1) output so it fits a DataFrame column
res_df['loaded_model_ONNX_y_prob'] = np.ravel(y_prob)
res_df.to_csv('loaded_model_ONNX_y_prob.csv', index=False)
print(res_df)
```
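Rather than eyeballing the CSV, the two prediction arrays can be compared numerically. A minimal sketch with synthetic arrays standing in for model.predict(X_val) and the ONNX Runtime output above (both names are illustrative placeholders):

```python
import numpy as np

# Stand-ins: in practice keras_prob = model.predict(X_val) and y_prob is
# the ONNX Runtime output from the session above
keras_prob = np.array([[0.12], [0.87], [0.45]], dtype=np.float32)
y_prob = keras_prob + np.float32(1e-6)  # simulate tiny float32 round-off

# Conversion is not bit-exact, so compare with a tolerance rather than ==
assert np.allclose(keras_prob, y_prob, rtol=1e-3, atol=1e-5)
print("max abs diff:", float(np.max(np.abs(keras_prob - y_prob))))
```

If the assertion fails, the usual suspects are mismatched input dtypes (ONNX Runtime above expects float32) or an unsupported layer that was approximated during conversion.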