Triton Server Quickstart

Official documentation

Background

  • In industrial settings, what blocks model deployment is often not the model itself but the available compute.
  • High-accuracy models usually come with large parameter counts.
  • Triton Server is NVIDIA's open-source high-performance inference server, a tool that accelerates model inference on both CPUs and GPUs.

What is Triton

  • Triton is a model inference serving tool.
  • It offers dynamic batching, concurrent model execution, model ensembles, and streaming inputs; model pipelines are assembled through configuration.
  • A script can be deployed as a model (acting as a custom processing step), so intermediate computation can stay in GPU memory.
  • The server exposes gRPC/HTTP APIs and exports Prometheus metrics for monitoring GPU utilization, latency, memory usage, and inference throughput.
  • Inference requests can be sent with the Triton client libraries.
  • Triton supports the mainstream accelerated inference backends: ONNX Runtime, TensorFlow SavedModel, and TensorRT.
  • It serves deep learning models as well as classical machine learning models such as logistic regression.
  • It runs on GPUs as well as x86 and ARM CPUs, and additionally supports the domestic GCU accelerator (which requires the GCU build of ONNX Runtime).
  • Models can be updated live in production without restarting Triton Server.
  • Triton supports multi-GPU and multi-node inference for models too large to fit in a single GPU's memory.
  • It supports performance evaluation, covering GPU utilization, server throughput, and server latency.

Code example

Prepare an ONNX model

  • You need a trained ONNX model file in advance; here we use the ONNX model file provided by the official tutorial.
# Create model repository with placeholder for model and version 1
mkdir -p ./models/densenet_onnx/1

# Download model and place it in model repository
wget -O ./models/densenet_onnx/1/model.onnx https://contentmamluswest001.blob.core.windows.net/content/14b2744cf8d6418c87ffddc3f3127242/9502630827244d60a1214f250e3bbca7/08aed7327d694b8dbaee2c97b8d0fcba/densenet121-1.2.onnx

Create a minimal model configuration

vim ./models/densenet_onnx/config.pbtxt
name: "densenet_onnx"
backend: "onnxruntime"
max_batch_size: 0
input: [
  {
    name: "data_0",
    data_type: TYPE_FP32,
    dims: [ 1, 3, 224, 224 ]
  }
]
output: [
  {
    name: "fc6_1",
    data_type: TYPE_FP32,
    dims: [ 1, 1000, 1, 1 ]
  }
]

The input and output tensors defined here can be inspected with the netron visualization tool, for example as sketched below.
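
If the netron Python package is installed (an assumption to verify against your environment), one way to open the viewer locally is:

pip install netron
netron ./models/densenet_onnx/1/model.onnx   # serves an interactive graph viewer in the browser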

Pull the two official images

docker pull nvcr.io/nvidia/tritonserver:23.02-py3  # triton server
docker pull nvcr.io/nvidia/tritonserver:23.02-py3-sdk # triton client

Start the Triton server

  • Start the container
# Start server container in the background
docker run -itd --gpus=all --network=host -v $PWD:/mnt --name triton-server nvcr.io/nvidia/tritonserver:23.02-py3 bash
  • Enter the container and run
[~]# tritonserver --model-repository=/mnt/models --model-control-mode=poll
I0403 06:07:10.866992 1186 server.cc:522] 
+------------------+------+
| Repository Agent | Path |
+------------------+------+
+------------------+------+

I0403 06:07:10.867083 1186 server.cc:549] 
+-------------+-------------------------------------------------------------------------+--------+
| Backend     | Path                                                                    | Config |
+-------------+-------------------------------------------------------------------------+--------+
| pytorch     | /opt/tritonserver/backends/pytorch/libtriton_pytorch.so                 | {}     |
| tensorflow  | /opt/tritonserver/backends/tensorflow1/libtriton_tensorflow1.so         | {}     |
| onnxruntime | /opt/tritonserver/backends/onnxruntime/libtriton_onnxruntime.so         | {}     |
| openvino    | /opt/tritonserver/backends/openvino_2021_2/libtriton_openvino_2021_2.so | {}     |
+-------------+-------------------------------------------------------------------------+--------+

I0403 06:07:10.867131 1186 server.cc:592] 
+---------------+---------+--------+
| Model         | Version | Status |
+---------------+---------+--------+
| densenet_onnx | 2       | READY  |
+---------------+---------+--------+

I0403 06:07:10.947730 1186 metrics.cc:623] Collecting metrics for GPU 0: NVIDIA GeForce RTX 3090
I0403 06:07:10.947760 1186 metrics.cc:623] Collecting metrics for GPU 1: NVIDIA GeForce RTX 3090
I0403 06:07:10.947772 1186 metrics.cc:623] Collecting metrics for GPU 2: NVIDIA GeForce RTX 3090
I0403 06:07:10.947784 1186 metrics.cc:623] Collecting metrics for GPU 3: NVIDIA GeForce RTX 3090
I0403 06:07:10.947800 1186 metrics.cc:623] Collecting metrics for GPU 4: NVIDIA GeForce RTX 3090
I0403 06:07:10.947819 1186 metrics.cc:623] Collecting metrics for GPU 5: NVIDIA GeForce RTX 3090
I0403 06:07:10.947852 1186 metrics.cc:623] Collecting metrics for GPU 6: NVIDIA GeForce RTX 3090
I0403 06:07:10.947886 1186 metrics.cc:623] Collecting metrics for GPU 7: NVIDIA GeForce RTX 3090
I0403 06:07:10.949215 1186 tritonserver.cc:1932] 
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Option                           | Value                                                                                                                                                                                        |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| server_id                        | triton                                                                                                                                                                                       |
| server_version                   | 2.19.0                                                                                                                                                                                       |
| server_extensions                | classification sequence model_repository model_repository(unload_dependents) schedule_policy model_configuration system_shared_memory cuda_shared_memory binary_tensor_data statistics trace |
| model_repository_path[0]         | /mnt/models                                                                                                                                                                                  |
| model_control_mode               | MODE_POLL                                                                                                                                                                                    |
| strict_model_config              | 1                                                                                                                                                                                            |
| rate_limit                       | OFF                                                                                                                                                                                          |
| pinned_memory_pool_byte_size     | 268435456                                                                                                                                                                                    |
| cuda_memory_pool_byte_size{0}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{1}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{2}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{3}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{4}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{5}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{6}    | 67108864                                                                                                                                                                                     |
| cuda_memory_pool_byte_size{7}    | 67108864                                                                                                                                                                                     |
| response_cache_byte_size         | 0                                                                                                                                                                                            |
| min_supported_compute_capability | 6.0                                                                                                                                                                                          |
| strict_readiness                 | 1                                                                                                                                                                                            |
| exit_timeout                     | 30                                                                                                                                                                                           |
+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

I0403 06:07:10.950873 1186 grpc_server.cc:4375] Started GRPCInferenceService at 0.0.0.0:8001
I0403 06:07:10.951176 1186 http_server.cc:3075] Started HTTPService at 0.0.0.0:8000
I0403 06:07:10.992539 1186 http_server.cc:178] Started Metrics Service at 0.0.0.0:8002

1. The log shows that the densenet_onnx model has been loaded and that the gRPC, HTTP, and metrics services have started.
2. --model-control-mode=poll enables hot reloading of models: when a model file changes or a new version is added, Triton first brings up the new version's instances and then unloads the old version/instances. The running endpoints can be sanity-checked as sketched below.
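
With the server up, the standard v2 HTTP endpoints give a quick sanity check (ports as in the log above; a minimal sketch):

curl -v localhost:8000/v2/health/ready                  # server readiness
curl -v localhost:8000/v2/models/densenet_onnx/ready    # model readiness
curl    localhost:8000/v2/models/densenet_onnx          # model metadata (inputs/outputs)
curl    localhost:8002/metrics                          # Prometheus metrics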

  • Add model versions
[~]# cp -rf 1 2   # run inside the densenet_onnx model directory: copy version 1 into new version directories
[~]# cp -rf 1 3

I0403 06:07:26.109494 1186 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: densenet_onnx (version 3)
I0403 06:07:26.119616 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 0)
I0403 06:07:26.319224 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 1)
I0403 06:07:26.495285 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 2)
I0403 06:07:26.669370 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 3)
I0403 06:07:26.829762 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 4)
I0403 06:07:27.007662 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 5)
I0403 06:07:27.182506 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 6)
I0403 06:07:27.367420 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 7)
I0403 06:07:27.532531 1186 model_repository_manager.cc:1149] successfully loaded 'densenet_onnx' version 3
I0403 06:07:27.532561 1186 model_repository_manager.cc:1026] unloading: densenet_onnx:2
I0403 06:07:27.532729 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.548199 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.561028 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.573967 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.585593 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.596050 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.605498 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.614892 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:07:27.624120 1186 onnxruntime.cc:2423] TRITONBACKEND_ModelFinalize: delete model state
I0403 06:07:27.624158 1186 model_repository_manager.cc:1132] successfully unloaded 'densenet_onnx' version 2
I0403 06:14:42.551308 1186 model_repository_manager.cc:994] loading: densenet_onnx:3
I0403 06:14:42.651625 1186 onnxruntime.cc:2400] TRITONBACKEND_ModelInitialize: densenet_onnx (version 3)
I0403 06:14:42.659502 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 0)
I0403 06:14:42.851975 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 1)
I0403 06:14:43.027086 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 2)
I0403 06:14:43.203822 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 3)
I0403 06:14:43.378325 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 4)
I0403 06:14:43.552427 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 5)
I0403 06:14:43.732855 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 6)
I0403 06:14:43.903087 1186 onnxruntime.cc:2443] TRITONBACKEND_ModelInstanceInitialize: densenet_onnx (GPU device 7)
I0403 06:14:44.071766 1186 model_repository_manager.cc:1149] successfully loaded 'densenet_onnx' version 3
I0403 06:14:44.071795 1186 model_repository_manager.cc:1026] unloading: densenet_onnx:3
I0403 06:14:44.071970 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.081007 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.089658 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.098768 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.107905 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.116819 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.125697 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.134503 1186 onnxruntime.cc:2477] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I0403 06:14:44.143469 1186 onnxruntime.cc:2423] TRITONBACKEND_ModelFinalize: delete model state
I0403 06:14:44.143503 1186 model_repository_manager.cc:1132] successfully unloaded 'densenet_onnx' version 3

The log above shows versions 2 and 3 being added; the version served by Triton ends up as 3.

  • Screenshot of all running instances (not reproduced here)

Client load test

  • Start the client
docker run -itd --gpus=all --network=host -v $PWD:/mnt --name triton-client nvcr.io/nvidia/tritonserver:23.02-py3-sdk bash
  • Run a load test with perf_analyzer (a variant with more options follows the output below)
[~]# perf_analyzer -m densenet_onnx -u 127.0.0.1:8000 --concurrency-range 1:6

Inferences/Second vs. Client Average Batch Latency
Concurrency: 1, throughput: 96.1522 infer/sec, latency 10396 usec
Concurrency: 2, throughput: 197.181 infer/sec, latency 10138 usec
Concurrency: 3, throughput: 305.046 infer/sec, latency 9832 usec
Concurrency: 4, throughput: 425.759 infer/sec, latency 9392 usec
Concurrency: 5, throughput: 564.87 infer/sec, latency 8850 usec
Concurrency: 6, throughput: 704.574 infer/sec, latency 8514 usec
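
perf_analyzer exposes more knobs worth sweeping; for example, a sketch that hits the gRPC endpoint instead and lengthens the measurement window (flag names as listed by perf_analyzer --help):

perf_analyzer -m densenet_onnx -u 127.0.0.1:8001 -i grpc \
    --concurrency-range 1:6 --measurement-interval 10000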

Model analysis tools

  • In the Triton server container
tritonserver --model-repository=/mnt/models --model-control-mode=explicit   # this mode is required; otherwise models cannot be loaded/unloaded
  • In the Triton client container
root@53:/mnt# cat config.yaml   # remote mode is used here so that throughput and latency can also be measured against e.g. domestic accelerator chips when no local GPU is needed
model_repository: /mnt/models
#checkpoint_directory: /mnt/checkpoints/
profile_models: densenet_onnx
triton_grpc_endpoint: 127.0.0.1:9001
triton_metrics_url: 127.0.0.1:9002
triton_launch_mode: remote

root@53:/mnt# rm -rf output_model_repository/ checkpoints/ && model-analyzer profile -f config.yaml  # using the config-file approach here
[Model Analyzer] Initializing GPUDevice handles
[Model Analyzer] Using GPU 0 NVIDIA GeForce RTX 3090 with UUID GPU-b6d3bb44-b607-e9c1-c898-3977340c20a4
[Model Analyzer] Using GPU 1 NVIDIA GeForce RTX 3090 with UUID GPU-f37fdb1b-77c7-ff1f-21c0-e2db53fe0818
[Model Analyzer] Using GPU 2 NVIDIA GeForce RTX 3090 with UUID GPU-1a0e40f7-65eb-9694-f91c-253808416e71
[Model Analyzer] Using GPU 3 NVIDIA GeForce RTX 3090 with UUID GPU-c889529d-734f-8a13-f820-02597663a704
[Model Analyzer] Using GPU 4 NVIDIA GeForce RTX 3090 with UUID GPU-9f08b528-c421-bc60-2fc6-7f906e13404a
[Model Analyzer] Using GPU 5 NVIDIA GeForce RTX 3090 with UUID GPU-9c9fbba1-0558-4f8e-1534-8ff8e8b03a6c
[Model Analyzer] Using GPU 6 NVIDIA GeForce RTX 3090 with UUID GPU-55808174-5a3e-8082-8759-b248794a1e34
[Model Analyzer] Using GPU 7 NVIDIA GeForce RTX 3090 with UUID GPU-2a0fd91b-3ca8-8249-2d0c-70c7853491a6
[Model Analyzer] Using remote Triton Server
[Model Analyzer] WARNING: GPU memory metrics reported in the remote mode are not accurate. Model Analyzer uses Triton explicit model control to load/unload models. Some frameworks do not release the GPU memory even when the memory is not being used. Consider using the "local" or "docker" mode if you want to accurately monitor the GPU memory usage for different models.
[Model Analyzer] WARNING: Config sweep parameters are ignored in the "remote" mode because Model Analyzer does not have access to the model repository of the remote Triton Server.
[Model Analyzer] No checkpoint file found, starting a fresh run.
[Model Analyzer] Profiling server only metrics...
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=1
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=2
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=4
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=8
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=16
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=32
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=64
[Model Analyzer] Profiling densenet_onnx: client batch size=1, concurrency=128
[Model Analyzer] No longer increasing concurrency as throughput has plateaued
[Model Analyzer] Saved checkpoint to /mnt/checkpoints/0.ckpt
[Model Analyzer] Profile complete. Profiled 1 configurations for models: ['densenet_onnx']
[Model Analyzer] 
[Model Analyzer] WARNING: GPU output field "gpu_used_memory", has no data
[Model Analyzer] WARNING: GPU output field "gpu_utilization", has no data
[Model Analyzer] WARNING: GPU output field "gpu_power_usage", has no data
[Model Analyzer] WARNING: Server output field "gpu_used_memory", has no data
[Model Analyzer] WARNING: Server output field "gpu_utilization", has no data
[Model Analyzer] WARNING: Server output field "gpu_power_usage", has no data
[Model Analyzer] Exporting inference metrics to /mnt/results/metrics-model-inference.csv
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] Exporting Summary Report to /mnt/reports/summaries/densenet_onnx/result_summary.pdf
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] WARNING: Requested top 3 configs, but found only 1. Showing all available configs for this model.
[Model Analyzer] To generate detailed reports for the 1 best configurations, run `model-analyzer report --report-model-configs densenet_onnx --export-path /mnt --config-file config.yaml

root@53:/mnt# ls
checkpoints  config.yaml  models  models23  output_model_repository  plots  reports  results
root@53:/mnt# cat results/metrics-
metrics-model-inference.csv  metrics-server-only.csv      
root@53:/mnt# cat results/metrics-model-inference.csv 
Model,Batch,Concurrency,Model Config Path,Instance Group,Max Batch Size,Satisfies Constraints,Throughput (infer/sec),p99 Latency (ms)
densenet_onnx,1,16,densenet_onnx,8:GPU,0,Yes,1394.9,13.1
densenet_onnx,1,64,densenet_onnx,8:GPU,0,Yes,1384.2,50.0
densenet_onnx,1,32,densenet_onnx,8:GPU,0,Yes,1384.0,25.3
densenet_onnx,1,128,densenet_onnx,8:GPU,0,Yes,1331.5,104.6
densenet_onnx,1,8,densenet_onnx,8:GPU,0,Yes,1215.6,7.7
densenet_onnx,1,4,densenet_onnx,8:GPU,0,Yes,472.0,12.2
densenet_onnx,1,2,densenet_onnx,8:GPU,0,Yes,172.5,17.3
densenet_onnx,1,1,densenet_onnx,8:GPU,0,Yes,95.6,15.6
  • Generate visual reports
root@53:/mnt# model-analyzer report --report-model-configs densenet_onnx --export-path /mnt --config-file config.yaml
[Model Analyzer] Loaded checkpoint from file /mnt/checkpoints/0.ckpt
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_power_usage' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_used_memory' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] WARNING: No GPU metric corresponding to tag 'gpu_utilization' found in the model's measurement. Possibly comparing measurements across devices.
[Model Analyzer] Exporting Detailed Report to /mnt/reports/detailed/densenet_onnx/detailed_report.pdf


  • model-analyzer profile options
root@sse-lg-113-53:/mnt# model-analyzer profile --help
usage: model-analyzer profile [-h] [-f CONFIG_FILE] [-s CHECKPOINT_DIRECTORY] [-i MONITORING_INTERVAL] [-d DURATION_SECONDS] [--collect-cpu-metrics] [--gpus GPUS] [--skip-summary-reports]
                              [-m MODEL_REPOSITORY] [--output-model-repository-path OUTPUT_MODEL_REPOSITORY_PATH] [--override-output-model-repository] [-r CLIENT_MAX_RETRIES]
                              [--client-protocol {http,grpc}] [--profile-models PROFILE_MODELS] [-b BATCH_SIZES] [-c CONCURRENCY] [--reload-model-disable] [--perf-analyzer-timeout PERF_ANALYZER_TIMEOUT]
                              [--perf-analyzer-cpu-util PERF_ANALYZER_CPU_UTIL] [--perf-analyzer-path PERF_ANALYZER_PATH] [--perf-output] [--perf-output-path PERF_OUTPUT_PATH]
                              [--perf-analyzer-max-auto-adjusts PERF_ANALYZER_MAX_AUTO_ADJUSTS] [--triton-launch-mode {local,docker,remote,c_api}] [--triton-docker-image TRITON_DOCKER_IMAGE]
                              [--triton-http-endpoint TRITON_HTTP_ENDPOINT] [--triton-grpc-endpoint TRITON_GRPC_ENDPOINT] [--triton-metrics-url TRITON_METRICS_URL] [--triton-server-path TRITON_SERVER_PATH]
                              [--triton-output-path TRITON_OUTPUT_PATH] [--triton-docker-mounts TRITON_DOCKER_MOUNTS] [--triton-docker-shm-size TRITON_DOCKER_SHM_SIZE]
                              [--triton-install-path TRITON_INSTALL_PATH] [--early-exit-enable] [--run-config-search-max-concurrency RUN_CONFIG_SEARCH_MAX_CONCURRENCY]
                              [--run-config-search-min-concurrency RUN_CONFIG_SEARCH_MIN_CONCURRENCY] [--run-config-search-max-instance-count RUN_CONFIG_SEARCH_MAX_INSTANCE_COUNT]
                              [--run-config-search-min-instance-count RUN_CONFIG_SEARCH_MIN_INSTANCE_COUNT] [--run-config-search-max-model-batch-size RUN_CONFIG_SEARCH_MAX_MODEL_BATCH_SIZE]
                              [--run-config-search-min-model-batch-size RUN_CONFIG_SEARCH_MIN_MODEL_BATCH_SIZE] [--run-config-search-mode {brute,quick}] [--run-config-search-disable]
                              [--run-config-profile-models-concurrently-enable] [-e EXPORT_PATH] [--filename-model-inference FILENAME_MODEL_INFERENCE] [--filename-model-gpu FILENAME_MODEL_GPU]
                              [--filename-server-only FILENAME_SERVER_ONLY] [--num-configs-per-model NUM_CONFIGS_PER_MODEL] [--num-top-model-configs NUM_TOP_MODEL_CONFIGS]
                              [--inference-output-fields INFERENCE_OUTPUT_FIELDS] [--gpu-output-fields GPU_OUTPUT_FIELDS] [--server-output-fields SERVER_OUTPUT_FIELDS] [--latency-budget LATENCY_BUDGET]
                              [--min-throughput MIN_THROUGHPUT]

optional arguments:
  -h, --help            show this help message and exit
  -f CONFIG_FILE, --config-file CONFIG_FILE
                        Path to Config File for subcommand 'profile'.
  -s CHECKPOINT_DIRECTORY, --checkpoint-directory CHECKPOINT_DIRECTORY
                        Full path to directory to which to read and write checkpoints and profile data.
  -i MONITORING_INTERVAL, --monitoring-interval MONITORING_INTERVAL
                        Interval of time between metrics measurements in seconds
  -d DURATION_SECONDS, --duration-seconds DURATION_SECONDS
                        Specifies how long (seconds) to gather server-only metrics
  --collect-cpu-metrics
                        Specify whether CPU metrics are collected or not
  --gpus GPUS           List of GPU UUIDs to be used for the profiling. Use 'all' to profile all the GPUs visible by CUDA.
  --skip-summary-reports
                        Skips the generation of analysis summary reports and tables.
  -m MODEL_REPOSITORY, --model-repository MODEL_REPOSITORY
                        Triton Model repository location
  --output-model-repository-path OUTPUT_MODEL_REPOSITORY_PATH
                        Output model repository path used by Model Analyzer. This is the directory that will contain all the generated model configurations
  --override-output-model-repository
                        Will override the contents of the output model repository and replace it with the new results.
  -r CLIENT_MAX_RETRIES, --client-max-retries CLIENT_MAX_RETRIES
                        Specifies the max number of retries for any requests to Triton server.
  --client-protocol {http,grpc}
                        The protocol used to communicate with the Triton Inference Server
  --profile-models PROFILE_MODELS
                        List of the models to be profiled
  -b BATCH_SIZES, --batch-sizes BATCH_SIZES
                        Comma-delimited list of batch sizes to use for the profiling
  -c CONCURRENCY, --concurrency CONCURRENCY
                        Comma-delimited list of concurrency values or ranges <start:end:step> to be used during profiling
  --reload-model-disable
                        Flag to indicate whether or not to disable model loading and unloading in remote mode.
  --perf-analyzer-timeout PERF_ANALYZER_TIMEOUT
                        Perf analyzer timeout value in seconds.
  --perf-analyzer-cpu-util PERF_ANALYZER_CPU_UTIL
                        Maximum CPU utilization value allowed for the perf_analyzer.
  --perf-analyzer-path PERF_ANALYZER_PATH
                        The full path to the perf_analyzer binary executable
  --perf-output         Enables the output from the perf_analyzer to a file specified by perf_output_path. If perf_output_path is None, output will be written to stdout.
  --perf-output-path PERF_OUTPUT_PATH
                        Path to the file to which write perf_analyzer output, if enabled.
  --perf-analyzer-max-auto-adjusts PERF_ANALYZER_MAX_AUTO_ADJUSTS
                        Maximum number of times perf_analyzer is launched with auto adjusted parameters in an attempt to profile a model.
  --triton-launch-mode {local,docker,remote,c_api}
                        The method by which to launch Triton Server. 'local' assumes tritonserver binary is available locally. 'docker' pulls and launches a triton docker container with the specified
                        version. 'remote' connects to a running server using given http, grpc and metrics endpoints. 'c_api' allows direct benchmarking of Triton locallywithout the use of endpoints.
  --triton-docker-image TRITON_DOCKER_IMAGE
                        Triton Server Docker image tag
  --triton-http-endpoint TRITON_HTTP_ENDPOINT
                        Triton Server HTTP endpoint url used by Model Analyzer client.
  --triton-grpc-endpoint TRITON_GRPC_ENDPOINT
                        Triton Server HTTP endpoint url used by Model Analyzer client.
  --triton-metrics-url TRITON_METRICS_URL
                        Triton Server Metrics endpoint url.
  --triton-server-path TRITON_SERVER_PATH
                        The full path to the tritonserver binary executable
  --triton-output-path TRITON_OUTPUT_PATH
                        The full path to the file to which Triton server instance will append their log output. If not specified, they are not written.
  --triton-docker-mounts TRITON_DOCKER_MOUNTS
                        A list of strings representing volumes to be mounted. The strings should have the format '<host path>:<container path>:<access mode>'.
  --triton-docker-shm-size TRITON_DOCKER_SHM_SIZE
                        The size of the /dev/shm for the triton docker container
  --triton-install-path TRITON_INSTALL_PATH
                        Path to Triton install directory i.e. the parent directory of 'lib/libtritonserver.so'.Required only when using triton_launch_mode=c_api.
  --early-exit-enable   Flag to indicate if Model Analyzer can skip some configurations when manually searching concurrency or max_batch_size
  --run-config-search-max-concurrency RUN_CONFIG_SEARCH_MAX_CONCURRENCY
                        Max concurrency value that run config search should not go beyond that.
  --run-config-search-min-concurrency RUN_CONFIG_SEARCH_MIN_CONCURRENCY
                        Min concurrency value that run config search should start with.
  --run-config-search-max-instance-count RUN_CONFIG_SEARCH_MAX_INSTANCE_COUNT
                        Max instance count value that run config search should not go beyond that.
  --run-config-search-min-instance-count RUN_CONFIG_SEARCH_MIN_INSTANCE_COUNT
                        Min instance count value that run config search should start with.
  --run-config-search-max-model-batch-size RUN_CONFIG_SEARCH_MAX_MODEL_BATCH_SIZE
                        Value for the model's max_batch_size that run config search will not go beyond.
  --run-config-search-min-model-batch-size RUN_CONFIG_SEARCH_MIN_MODEL_BATCH_SIZE
                        Value for the model's max_batch_size that run config search will start from.
  --run-config-search-mode {brute,quick}
                        The search mode for Model Analyzer to find and evaluate model configurations. 'brute' will brute force all combinations of configuration options. 'quick' will attempt to find a
                        near-optimal configuration as fast as possible, but isn't guaranteed to find the best.
  --run-config-search-disable
                        Disable run config search.
  --run-config-profile-models-concurrently-enable
                        Enable the profiling of all supplied models concurrently.
  -e EXPORT_PATH, --export-path EXPORT_PATH
                        Full path to directory in which to store the results
  --filename-model-inference FILENAME_MODEL_INFERENCE
                        Specifies filename for storing model inference metrics
  --filename-model-gpu FILENAME_MODEL_GPU
                        Specifies filename for storing model GPU metrics
  --filename-server-only FILENAME_SERVER_ONLY
                        Specifies filename for server-only metrics
  --num-configs-per-model NUM_CONFIGS_PER_MODEL
                        The number of configurations to plot per model in the summary.
  --num-top-model-configs NUM_TOP_MODEL_CONFIGS
                        Model Analyzer will compare this many of the top models configs across all models.
  --inference-output-fields INFERENCE_OUTPUT_FIELDS
                        Specifies column keys for model inference metrics table
  --gpu-output-fields GPU_OUTPUT_FIELDS
                        Specifies column keys for model gpu metrics table
  --server-output-fields SERVER_OUTPUT_FIELDS
                        Specifies column keys for server-only metrics table
  --latency-budget LATENCY_BUDGET
                        Shorthand flag for specifying a maximum latency in ms.
  --min-throughput MIN_THROUGHPUT
                        Shorthand flag for specifying a minimum throughput.

Overall architecture diagram

(architecture diagram not reproduced here)

Overall processing flow

  • The Model Repository is the model store, local or remote, holding the models Triton will read (i.e. the mounted directory in the example code);
  • Triton receives external requests over HTTP/gRPC (or directly via the C API) and forwards the inference payload to each model's scheduler; the scheduler batches and queues the requests appropriately and routes each model type's data to the corresponding backend;
  • Each deep learning framework backend receives the request data, runs inference, and produces results;
  • Triton then returns all inference results to the client;

Extensibility

  • Extensible: Triton exposes a backend C API, so you can customize pre- and post-processing steps or even integrate a new deep learning framework;
  • Model management API: query and control the models Triton serves (a sketch follows this list);
  • Status monitoring: check in real time whether each endpoint is ready and running, its utilization, and the overall throughput and latency;
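
As an example of the model management API, with the server running in explicit mode the v2 repository endpoints can query and control models (a minimal sketch):

curl -X POST localhost:8000/v2/repository/index                         # list models and their state
curl -X POST localhost:8000/v2/repository/models/densenet_onnx/load    # load a model
curl -X POST localhost:8000/v2/repository/models/densenet_onnx/unload  # unload a model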

Concurrent model execution

  • Triton's architecture lets multiple models, or multiple instances of a single model (or of several models), execute concurrently;
  • When a Triton-served model receives several requests at once, by default Triton serializes and queues them and executes only one at a time.

Triton provides a model configuration option called instance_group that specifies how many parallel execution instances are allowed for each model; these parallel copies are called instances. By default, Triton places one instance of a model on a GPU and infers only one request at a time; setting the model's instance_group parameter increases the number of instances working on data concurrently (see the instance_group examples later in this article).

Three model scheduling strategies

Stateless models

Stateful models

Ensemble models

  • An ensemble model chains several models into one processing pipeline, where the output of one model becomes the input of the next; for example image pre-processing -> inference -> image post-processing;
  • Ensemble models must use the ensemble scheduler; the schedulers of the individual sub-models need no extra handling. An ensemble model is not a real model: it only specifies the data-flow paths between the sub-models.
name: "ensemble_model"
platform: "ensemble"
max_batch_size: 1
input [
  {
    name: "IMAGE"
    data_type: TYPE_STRING
    dims: [ 1 ]
  }
]
output [
  {
    name: "CLASSIFICATION"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  },
  {
    name: "SEGMENTATION"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
ensemble_scheduling {
  step [
    {
      model_name: "image_preprocess_model"
      model_version: -1
      input_map {
        key: "RAW_IMAGE"
        value: "IMAGE"
      }
      output_map {
        key: "PREPROCESSED_OUTPUT"
        value: "preprocessed_image"
      }
    },
    {
      model_name: "classification_model"
      model_version: -1
      input_map {
        key: "FORMATTED_IMAGE"
        value: "preprocessed_image"
      }
      output_map {
        key: "CLASSIFICATION_OUTPUT"
        value: "CLASSIFICATION"
      }
    },
    {
      model_name: "segmentation_model"
      model_version: -1
      input_map {
        key: "FORMATTED_IMAGE"
        value: "preprocessed_image"
      }
      output_map {
        key: "SEGMENTATION_OUTPUT"
        value: "SEGMENTATION"
      }
    }
  ]
}
  • Flow diagram (not reproduced here)

Model repository

  • When starting the Triton server, we must point it at the path to load models from:
  • tritonserver --model-repository=xxx

Local path

  • tritonserver --model-repository=/path/to/model/repository
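
For reference, the repository built earlier in this article ends up with the following layout:

models/
└── densenet_onnx/
    ├── config.pbtxt
    ├── 1/
    │   └── model.onnx
    ├── 2/
    │   └── model.onnx
    └── 3/
        └── model.onnx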

S3 object storage

  • tritonserver --model-repository=s3://bucket/path/to/model/repository

Others

  • Google Cloud Storage, Amazon S3, and Azure Storage

Model file types

  • TensorRT model: model.plan
  • ONNX model: model.onnx
  • TorchScript model: model.pt
  • TensorFlow model: model.graphdef or model.savedmodel
  • OpenVINO model: model.xml and model.bin
  • Python model: model.py
  • DALI model: model.dali

Model management modes

NONE mode

  • At startup, Triton tries to load every model in the repository; models that fail to load are marked UNAVAILABLE and cannot serve inference;
  • While Triton is running, changes to the model repository are ignored, and requests sent through the model control API fail with an error response;
  • This mode is enabled by starting Triton with --model-control-mode=none; NONE is also the default model management mode.

EXPLICIT mode

  • At startup, Triton loads only the models named with --load-model on the command line; if no --load-model is given, no model is loaded at all. Models that fail to load are marked UNAVAILABLE and cannot serve inference;
  • While Triton is running, models can be loaded and unloaded through the model control API; the response status indicates whether the operation succeeded. When reloading an already-loaded model, a failed reload leaves the loaded copy untouched, while a successful reload replaces it with the new copy;
  • This mode is enabled by starting Triton with --model-control-mode=explicit (see the sketch below);
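
A minimal startup sketch for this mode (paths as used earlier in this article):

tritonserver --model-repository=/mnt/models \
    --model-control-mode=explicit \
    --load-model=densenet_onnx   # repeat --load-model for each model to load at startup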

POLL mode

  • This mode is enabled by starting Triton with --model-control-mode=poll; set --repository-poll-secs to a non-zero value to control the polling interval;
  • It is a hot-reload mode: a file change or a newly added version triggers a reload of the new model version (see the sketch below).
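
A minimal startup sketch (the 30-second interval is only an illustration):

tritonserver --model-repository=/mnt/models \
    --model-control-mode=poll \
    --repository-poll-secs=30   # re-scan the repository every 30 seconds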

Model configuration notes

platform

  • Selects the Triton backend, i.e. which runtime performs inference for the model (examples below).
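
Both spellings appear in this article's own configs, e.g.:

backend: "onnxruntime"    # ONNX Runtime backend, as in the densenet_onnx example
platform: "ensemble"      # ensemble pseudo-model, as in the ensemble example above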

max_batch_size

  • The largest batch size Triton will accept for this model; 0 disables Triton-managed batching, in which case the full tensor shape (including any batch dimension) must be written out in dims, as in the example below.
input, output

  • Define the model's inputs and outputs
max_batch_size: 0
input: [
  {
    name: "data_0",
    data_type: TYPE_FP32,
    dims: [ 1, 3, 224, 224 ]
  }
]
output: [
  {
    name: "prob_1",
    data_type: TYPE_FP32,
    dims: [ 1, 1000, 1, 1 ]
  }
]

name

  • Defaults to the name of the directory the model lives in

version_policy

  • Specifies which versions of the model are available;
  • The default is version_policy: { latest: { num_versions: 1 }}, i.e. only the latest version is used;
  • To make every version in the model repo available: version_policy: { all: {} }
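
Besides latest and all, a fixed list of versions can be pinned (a sketch; the version numbers are illustrative):

version_policy: { specific: { versions: [ 1, 3 ]}}   # serve only versions 1 and 3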

instance_group

  • Sets the number of instances of the model, so that concurrent inference requests from outside can be served in parallel.
instance_group [
  {
    count: 2
    kind: KIND_GPU   # create 2 instances on every GPU
  }
]

instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0 ]
  },
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 1, 2 ]
  }
]  # one instance on gpu0, two instances each on gpu1 and gpu2

instance_group [
  {
    count: 2
    kind: KIND_CPU
  }
]  # two instances on the CPU



rate_limiter

  • The rate limiter gates when a model instance may execute: each instance declares the named resources it needs and a priority, and it is scheduled to run only when those resources are available.
instance_group [
  {
    count: 1
    kind: KIND_GPU
    gpus: [ 0, 1, 2 ]
    rate_limiter {
      resources [
        {
          name: "R1"
          count: 4
        },
        {
          name: "R2"
          global: True
          count: 2
        }
      ]
      priority: 2
    }
  }
]

Origin blog.csdn.net/xzpdxz/article/details/129858857