3. Adding a new execution provider to ONNX Runtime


An execution provider is essentially an executor targeting a specific hardware platform. ONNX Runtime currently ships the following:

  • MLAS (Microsoft Linear Algebra Subprograms)

  • NVIDIA CUDA

  • Intel MKL-ML

  • Intel DNNL - subgraph optimization

  • Intel nGraph

  • NVIDIA TensorRT

  • Intel OpenVINO

  • Nuphar Model Compiler

  • DirectML

  • ACL (in preview, for ARM Compute Library)
These ten execution providers cover ARM, Intel CPUs, NVIDIA GPUs, and multiple operating systems, but this is still far from enough for today's diverse AI hardware market. If we want to add an execution provider for our own custom platform, we need to follow the steps below.

  1. Create a new folder under onnxruntime/core/providers, named after your provider, e.g. your_provider_name.
  2. Create a new folder under include/onnxruntime/core/providers with the same name as in step 1.
  3. Create a new class that inherits from IExecutionProvider. The implementation source goes under the 'onnxruntime/core/providers/[your_provider_name]' directory.
  4. Create a header file under include/onnxruntime/core/providers/[your_provider_name]. This header should declare a function for creating an OrtProviderFactoryInterface. You can use 'include/onnxruntime/core/providers/cpu/cpu_provider_factory.h' as a template; note that a function for creating MemoryInfo is not required.
  5. Create a symbols.txt file under 'onnxruntime/core/providers/[your_provider_name]' listing every function name this execution provider exports. Usually a single function for creating the provider factory is enough.
  6. Add build rules for your execution provider to onnxruntime_providers.cmake and build it as a static library.
  7. Add your provider library to the target_link_libraries call in cmake/onnxruntime.cmake so that onnxruntime links against it.
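Steps 5–7 touch the export list and the build system. A hedged sketch of what these additions might look like — the variable and target names below follow the naming pattern of existing providers, but the exact layout of onnxruntime_providers.cmake varies between onnxruntime versions, so treat this as a template rather than copy-paste:

```cmake
# symbols.txt (step 5) would typically contain a single exported factory
# function name, e.g.:
#   OrtSessionOptionsAppendExecutionProvider_YourProviderName

# In cmake/onnxruntime_providers.cmake (step 6): gather the provider
# sources and build them as a static library.
file(GLOB onnxruntime_providers_your_provider_name_srcs
  "${ONNXRUNTIME_ROOT}/core/providers/your_provider_name/*.cc"
)
add_library(onnxruntime_providers_your_provider_name STATIC
  ${onnxruntime_providers_your_provider_name_srcs})

# In cmake/onnxruntime.cmake (step 7): add the static lib to the existing
# target_link_libraries call so the main library links against it.
target_link_libraries(onnxruntime PRIVATE
  onnxruntime_providers_your_provider_name)
```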

Examples:

Using the execution provider

  1. Create a factory for that provider, using the C function you exported in 'symbols.txt'
  2. Put the provider factory into the session options
  3. Create a session from those session options
    e.g.
  OrtEnv* env;
  OrtInitialize(ORT_LOGGING_LEVEL_WARNING, "test", &env);
  OrtSessionOptions* session_option = OrtCreateSessionOptions();
  OrtProviderFactoryInterface** factory;
  OrtCreateCUDAExecutionProviderFactory(0, &factory);
  OrtSessionOptionsAppendExecutionProvider(session_option, factory);
  OrtReleaseObject(factory);
  OrtSession* session;  /* declare the session handle before creating it */
  OrtCreateSession(env, model_path, session_option, &session);


Reposted from blog.csdn.net/xxradon/article/details/104100243