8. Graph Optimizations in ONNX Runtime

Source document

Graph Optimizations in ONNX Runtime

ONNX Runtime provides various graph optimizations to improve model performance. Graph optimizations are essentially graph-level transformations, ranging from small graph simplifications and node eliminations to more complex node fusions and layout optimizations.

Graph optimizations are divided into several categories (or levels) based on their complexity and functionality. They can be performed either online or offline. In online mode, the optimizations are applied before inference is performed, while in offline mode, the runtime additionally saves the optimized graph to disk. ONNX Runtime provides Python, C#, C++, and C APIs to enable different optimization levels and to choose between online and offline mode.

Below we provide details on the optimization levels, the online/offline modes, and the various APIs to control them.

Graph Optimization Levels

Graph optimizations are divided into three levels:

  • Basic
  • Extended
  • Layout Optimizations

The optimizations belonging to one level are performed after the optimizations of the previous level have been applied (e.g., extended optimizations are applied only after basic optimizations have been applied).

All optimizations are enabled by default.

Basic Graph Optimizations

These are semantics-preserving graph rewrites which remove redundant nodes and redundant computation. They run before graph partitioning and thus apply to all execution providers. The available basic graph optimizations are as follows:

  • Constant Folding: Statically computes the parts of the graph that rely only on constant initializers. This eliminates the need to compute them at runtime.
  • Redundant node eliminations: Remove redundant nodes without changing the graph structure. The following such optimizations are currently supported:
    • Identity Elimination
    • Slice Elimination
    • Unsqueeze Elimination
    • Dropout Elimination
  • Semantics-preserving node fusions: Fuse/fold multiple nodes into a single node. For example, Conv Add fusion folds the Add operator into the bias of the Conv operator (see the sketch after this list). The following such optimizations are currently supported:
    • Conv Add Fusion
    • Conv Mul Fusion
    • Conv BatchNorm Fusion
    • Relu Clip Fusion
    • Reshape Fusion
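
To see why Conv Add fusion is semantics-preserving, here is a minimal numpy sketch (illustrative only, not ONNX Runtime code; the 1x1 convolution and all tensor names are assumptions for the example). It shows that adding a per-channel constant after a convolution is equivalent to folding that constant into the convolution's bias:

import numpy as np

x = np.random.randn(1, 3, 8, 8)   # NCHW input
w = np.random.randn(4, 3, 1, 1)   # 1x1 conv kernel, 4 output channels
b = np.random.randn(4)            # conv bias
c = np.random.randn(4)            # per-channel constant added after the conv

def conv1x1(x, w, bias):
    # a 1x1 convolution is just a matmul over the channel dimension
    co, ci = w.shape[0], w.shape[1]
    out = np.einsum('nchw,kc->nkhw', x, w.reshape(co, ci))
    return out + bias.reshape(1, co, 1, 1)

unfused = conv1x1(x, w, b) + c.reshape(1, 4, 1, 1)  # Conv node followed by an Add node
fused = conv1x1(x, w, b + c)                        # Add folded into the Conv bias
print(np.allclose(unfused, fused))                  # True

Because both computations produce identical results, the Add node can be safely deleted, leaving only the Conv node with an adjusted bias.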

Extended Graph Optimizations

These optimizations include complex node fusions. They run after graph partitioning and are only applied to nodes assigned to the CPU or CUDA execution provider. The available extended graph optimizations are as follows:

| Optimization | Execution Provider | Comment |
| --- | --- | --- |
| GEMM Activation Fusion | CPU | |
| Matmul Add Fusion | CPU | |
| Conv Activation Fusion | CPU | |
| GELU Fusion | CPU or CUDA | |
| Layer Normalization Fusion | CPU or CUDA | |
| BERT Embedding Layer Fusion | CPU or CUDA | Fuses the BERT embedding layer, layer normalization, and attention mask length |
| Attention Fusion | CPU or CUDA | The attention mask is approximated in the CUDA execution provider |
| Skip Layer Normalization Fusion | CPU or CUDA | Fuses the bias of the fully connected layer, the skip connection, and layer normalization |
| Bias GELU Fusion | CPU or CUDA | Fuses the bias of the fully connected layer and the GELU activation |
| GELU Approximation | CUDA | Erf is approximated by a formula using the tanh function |

To optimize the inference performance of BERT models, approximations are used in GELU Approximation and Attention Fusion for the CUDA execution provider, so results may differ slightly. Based on our evaluation, the impact on accuracy is negligible: the F1 score of a BERT model on SQuAD v1.1 is almost the same (87.05 vs. 87.03).
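
For reference, here is a minimal numpy sketch (an illustration under our own assumptions, not ONNX Runtime code) of the widely used tanh-based formula for approximating the erf-based GELU, which is the kind of substitution the GELU Approximation optimization refers to:

import numpy as np
from scipy.special import erf

def gelu_exact(x):
    # exact GELU, defined via the error function
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

def gelu_tanh(x):
    # common tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-5.0, 5.0, 1001)
# prints a small value (well below 1e-2), showing close pointwise agreement
print(np.max(np.abs(gelu_exact(x) - gelu_tanh(x))))

The small pointwise error is consistent with the near-identical SQuAD v1.1 F1 scores reported above.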

Layout Optimizations

These optimizations change the data layout of applicable nodes to achieve further performance improvements. They run after graph partitioning and are only applied to nodes assigned to the CPU execution provider. The available layout optimizations are as follows:

  • NCHWc Optimizer: Optimizes the graph by using the NCHWc layout instead of the NCHW layout.
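
As a rough illustration of what the NCHWc layout means, here is a minimal numpy sketch (our own assumption for illustration; the block size of 8 is only an example, and the real block size depends on the CPU's vector width):

import numpy as np

c_block = 8                          # example block size (e.g., 8 floats per AVX2 vector)
x = np.random.randn(1, 32, 16, 16)   # NCHW tensor: N=1, C=32, H=W=16
n, C, H, W = x.shape

# split C into C//c_block blocks so each group of c_block channel values
# is contiguous in memory, which suits vectorized convolution kernels
x_nchwc = x.reshape(n, C // c_block, c_block, H, W).transpose(0, 1, 3, 4, 2)
print(x_nchwc.shape)                 # (1, 4, 16, 16, 8)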

Online/Offline Mode

All optimizations can be performed either online or offline. In online mode, when initializing an inference session, all enabled graph optimizations are applied before model inference is performed. Applying all optimizations each time a session is initiated adds overhead to model startup time (especially for complex models), which can be significant in production deployments. This is where offline mode helps: after performing the graph optimizations, ONNX Runtime serializes the resulting model to disk. Subsequently, when a new inference session is created for this model, the already-optimized model can be used instead, reducing startup time (a minimal Python sketch of this workflow follows the notes below).

Notes:

  • When running in offline mode, make sure to use exactly the same options (e.g., execution providers, optimization level) and hardware as the target machine that model inference will run on (e.g., you cannot run a model pre-optimized for a GPU execution provider on a machine that is equipped only with a CPU).
  • When layout optimizations are enabled, the offline model can only be used on hardware compatible with the environment in which it was saved. For example, if the model's layout is optimized for AVX2, the offline model will require CPUs that support AVX2.
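
A minimal Python sketch of the offline workflow (the file paths are placeholders, and disabling optimizations on reload is our suggestion to avoid re-running them, not a requirement of the API):

import onnxruntime as rt

# Step 1 (run once, on hardware matching the deployment target):
# apply the optimizations and serialize the optimized model to disk.
opts = rt.SessionOptions()
opts.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
opts.optimized_model_filepath = "optimized_model.onnx"  # placeholder path
rt.InferenceSession("model.onnx", opts)

# Step 2 (every deployment): load the pre-optimized model; disabling
# optimizations avoids paying the optimization cost again at startup.
opts = rt.SessionOptions()
opts.graph_optimization_level = rt.GraphOptimizationLevel.ORT_DISABLE_ALL
session = rt.InferenceSession("optimized_model.onnx", opts)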

Usage

General Notes

Levels:
ONNX Runtime defines the GraphOptimizationLevel enum to determine which of the aforementioned optimization levels will be enabled. Choosing a level enables the optimizations of that level, as well as the optimizations of all preceding levels. For example, enabling extended optimizations also enables basic optimizations. The mapping of these levels to the enum is as follows:

  • GraphOptimizationLevel::ORT_DISABLE_ALL -> Disables all optimizations
  • GraphOptimizationLevel::ORT_ENABLE_BASIC -> Enables basic optimizations
  • GraphOptimizationLevel::ORT_ENABLE_EXTENDED -> Enables basic and extended optimizations
  • GraphOptimizationLevel::ORT_ENABLE_ALL -> Enables all available optimizations, including layout optimizations

Online/Offline Mode:
To enable serialization of the optimized model to disk, set the optimized model file path in SessionOptions (optimized_model_filepath in the Python API) to the desired location where the optimized model will be stored.

Python API Usage

import onnxruntime as rt

sess_options = rt.SessionOptions()

# Set graph optimization level
sess_options.graph_optimization_level = rt.GraphOptimizationLevel.ORT_ENABLE_EXTENDED

# To enable model serialization after graph optimization, set this
sess_options.optimized_model_filepath = "<model_output_path/optimized_model.onnx>"

session = rt.InferenceSession("<model_path>", sess_options)

C API Example:

  const OrtApi* g_ort = OrtGetApiBase()->GetApi(ORT_API_VERSION);
  OrtEnv* env;
  g_ort->CreateEnv(ORT_LOGGING_LEVEL_WARNING, "test", &env);
  OrtSessionOptions* session_options;
  g_ort->CreateSessionOptions(&session_options);

  // Set graph optimization level
  g_ort->SetSessionGraphOptimizationLevel(session_options, ORT_ENABLE_EXTENDED);

  // To enable model serialization after graph optimization, set this
  const wchar_t* optimized_model_path = L"optimized_model_path";
  g_ort->SetOptimizedModelFilePath(session_options, optimized_model_path);

  OrtSession* session;
  const wchar_t* model_path = L"model_path";
  g_ort->CreateSession(env, model_path, session_options, &session);

C# API Example:

SessionOptions so = new SessionOptions();

// Set graph optimization level
so.GraphOptimizationLevel = GraphOptimizationLevel.ORT_ENABLE_EXTENDED;

// To enable model serialization after graph optimization, set this
so.OptimizedModelFilePath = @"model_output_path\optimized_model.onnx";

var session = new InferenceSession(modelPath, so);

C++ API Example:

Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "test");
Ort::SessionOptions session_options;

// Set graph optimization level
session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);

// To enable model serialization after graph optimization, set this
session_options.SetOptimizedModelFilePath("optimized_file_path");

auto session_ = Ort::Session(env, "model_file_path", session_options);

Reprinted from blog.csdn.net/xxradon/article/details/104117617