YOLOv8 Series [4]: Deploying a YOLOv8 Model on the Jetson Platform
0. Install the environment
To install torch and torchvision, refer to the official PyTorch for Jetson installation command collection and download the matching wheels.
The versions I use are:
torch-1.10.0-cp37-cp37m-linux_aarch64.whl
torchvision-0.11.0-cp37-cp37m-linux_aarch64.whl
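The torch and torchvision builds have to be a matching pair (torch 1.10 pairs with torchvision 0.11, as in the wheels above). A minimal sketch of that check, with a hypothetical `check_pair` helper and a small excerpt of the known pairing table:

```python
# Hypothetical helper: verify torch / torchvision versions are a known-good
# pair. The table is a small excerpt, not an official API.
COMPATIBLE = {
    "1.10": "0.11",
    "1.11": "0.12",
    "1.12": "0.13",
}

def check_pair(torch_ver: str, tv_ver: str) -> bool:
    """Return True when the major.minor versions form a known-good pair."""
    torch_mm = ".".join(torch_ver.split(".")[:2])
    tv_mm = ".".join(tv_ver.split(".")[:2])
    return COMPATIBLE.get(torch_mm) == tv_mm

print(check_pair("1.10.0", "0.11.0"))  # the wheel pair used in this post
```

A mismatched pair typically fails at `import torchvision` with an operator-registration error, so checking before installing saves a reflash-and-retry cycle.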
1. Download the source code
Download: DeepStream-Yolo
Download: ultralytics
Copy DeepStream-Yolo/utils/export_yoloV8.py into the root directory of ultralytics:
cp DeepStream-Yolo/utils/export_yoloV8.py ultralytics
2. Convert the .pt model to .onnx
- Conversion script:
python export_yoloV8.py -w drone_yolov8m_best.pt --opset=12
Running the script above produces labels.txt and drone_yolov8m_best.onnx.
- If you run the conversion with the command below instead, an error is reported; adding --opset=12 is the fix:
python export_yoloV8.py -w drone_yolov8m_best.pt
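For reference, the two flags used above can be sketched with argparse. This is an illustrative mock of the command-line interface only, not the real export_yoloV8.py, which has more options and whose default opset triggered the error noted above:

```python
# Illustrative sketch of the export command line used above; the real
# export_yoloV8.py in DeepStream-Yolo has more options than shown here.
import argparse

parser = argparse.ArgumentParser(description="Export a YOLOv8 .pt model to ONNX")
parser.add_argument("-w", "--weights", required=True,
                    help="path to the .pt weights file")
parser.add_argument("--opset", type=int,
                    help="ONNX opset version; 12 avoided the export error here")

args = parser.parse_args(["-w", "drone_yolov8m_best.pt", "--opset=12"])
print(args.weights, args.opset)  # → drone_yolov8m_best.pt 12
```

Pinning the opset matters because TensorRT's ONNX parser only supports a bounded range of opsets; 12 is a safe choice for the TensorRT builds shipped with JetPack.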
3. Set up DeepStream-Yolo
Place the exported .onnx model and label file in the DeepStream-Yolo directory.
- Build the custom parser library:
CUDA_VER=11.4 make -C nvdsinfer_custom_impl_Yolo
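CUDA_VER must match the CUDA toolkit actually installed on the device (JetPack 5.x ships CUDA 11.4, which is why 11.4 is used above). A small sketch, assuming nvcc's usual version banner, that extracts the value instead of hard-coding it:

```python
# Sketch: derive CUDA_VER from nvcc's version banner rather than
# hard-coding it. Assumes the standard "release X.Y" banner format.
import re

def cuda_ver(nvcc_output: str) -> str:
    """Pull 'major.minor' out of nvcc's banner, e.g. '11.4'."""
    m = re.search(r"release (\d+\.\d+)", nvcc_output)
    if m is None:
        raise RuntimeError("could not parse nvcc output")
    return m.group(1)

# Example banner line from a JetPack 5.x device:
sample = "Cuda compilation tools, release 11.4, V11.4.239"
print(cuda_ver(sample))  # → 11.4
```

On the device itself the same thing can be done directly in the shell, e.g. `CUDA_VER=$(nvcc --version | grep -oP 'release \K[0-9.]+') make -C nvdsinfer_custom_impl_Yolo`.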
- Configuration
Modify the related settings in config_infer_primary_yoloV8.txt:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
onnx-file=drone_yolov8m_best.onnx
model-engine-file=drone_yolov8m.onnx_b1_gpu0_fp32.engine
#int8-calib-file=calib.table
labelfile-path=labels_drone.txt
batch-size=1
network-mode=0
num-detected-classes=1
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
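The file above is plain INI, so Python's standard configparser can sanity-check it for typos before launching deepstream-app. In this sketch the key lines are inlined as a string; on the device you would point `cfg.read()` at the real config_infer_primary_yoloV8.txt instead:

```python
# Sanity-check a few values from the DeepStream nvinfer config using only
# the standard library. EXCERPT inlines the key lines from the file above.
import configparser

EXCERPT = """\
[property]
net-scale-factor=0.0039215697906911373
batch-size=1
network-mode=0
num-detected-classes=1
parse-bbox-func-name=NvDsInferParseYolo

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
"""

cfg = configparser.ConfigParser()
cfg.read_string(EXCERPT)

prop = cfg["property"]
# net-scale-factor is 1/255: it rescales 0-255 pixel values into 0-1.
assert abs(float(prop["net-scale-factor"]) - 1 / 255) < 1e-6
assert int(prop["batch-size"]) == 1
assert int(prop["network-mode"]) == 0  # 0 = FP32, 1 = INT8, 2 = FP16
print("config OK")
```

Note that network-mode=0 builds an FP32 engine (matching the `fp32` in the model-engine-file name above); switching to FP16 (2) is a common speed-up on Jetson once accuracy is verified.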
4. Run
deepstream-app -c deepstream_app_config_yolov8_drone.txt
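Before launching, it can help to confirm that the files the config points to are actually present in the working directory (file names taken from the config above; the .engine file is generated on first run, so it is allowed to be absent). A small pre-flight sketch:

```python
# Pre-flight check before running deepstream-app: list any files referenced
# by config_infer_primary_yoloV8.txt that are missing from the current
# directory. Names are taken from the config shown earlier in this post.
from pathlib import Path

required = [
    "drone_yolov8m_best.onnx",
    "labels_drone.txt",
    "nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so",
]
missing = [name for name in required if not Path(name).exists()]
print("missing:", missing or "none")
```

If the .onnx or label file is missing, deepstream-app fails at engine build time, so catching it here is cheaper than waiting out the TensorRT build.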
This completes the deployment of YOLOv8 on an NVIDIA Jetson using TensorRT and the DeepStream SDK.