In Part 1 of this blog post series, we showed how to use the mmdetection framework to train an object detection model and fine-tune it on the BDD100K dataset. In Part 2, we'll walk through converting that model to TensorRT and running inference on an NVIDIA GPU, covering the following topics:
Converting Models to TensorRT: We explain what TensorRT is and how it optimizes and accelerates inference of deep learning models on NVIDIA GPUs, and we show how to convert the fine-tuned object detection model to TensorRT using the TensorRT Python API.
Inference with TensorRT: Once the model is converted, we demonstrate how to use the resulting engine to perform inference on an NVIDIA GPU.
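As a quick preview of the conversion step: TensorRT most commonly ingests a model that has first been exported to ONNX. Before diving into the Python API, it's worth knowing that TensorRT ships with a command-line tool, `trtexec`, that can build an engine from an ONNX file in one step. A minimal sketch (the file names here are placeholders, and an ONNX export of the fine-tuned model is assumed):

```shell
# Build a serialized TensorRT engine from an ONNX export of the model.
# "model.onnx" and "model.engine" are placeholder file names.
trtexec --onnx=model.onnx \
        --saveEngine=model.engine \
        --fp16   # enable FP16 precision where the GPU supports it
```

The Python API covered in this post offers finer control over the same build process (precision flags, optimization profiles, and error handling), which is why we use it for the conversion walkthrough.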