Installing TensorRT on Windows 11 (Python and C++ deployment with CMake / VS2019)

This guide assumes that CUDA, cuDNN, PyTorch, and the rest of the environment are already installed (just make sure the versions correspond to each other).

The key point is getting CMake to load the CUDA and TensorRT libraries.

TensorRT download official website:

https://developer.nvidia.com/nvidia-tensorrt-download

Download the 8.x GA (stable) release.

Unzip the downloaded archive into a suitable directory, for example: C:\Program Files\TensorRT-8.5.1.7

Add the absolute path of the lib subdirectory of the unzipped folder to the PATH environment variable.

Copy the DLL files from the lib directory into the CUDA bin directory, e.g. C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.1\bin.

  • With VS2019 it is much simpler: import the project, open the .sln file, and run.

  • If you build with CMake (e.g. in VS Code or CLion), use a CMakeLists.txt like the one below.
cmake_minimum_required(VERSION 3.25)
project(tensorRtTest)
set(CMAKE_CXX_STANDARD 11)

set(TENSORRT_ROOT "C:/Program Files/TensorRT-8.5.1.7")

# CUDA include paths
find_package(CUDA REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})

# TensorRT include path
include_directories(${TENSORRT_ROOT}/include)

# TensorRT library search path; link_directories must come BEFORE
# add_executable so it applies to the target
link_directories(${TENSORRT_ROOT}/lib)

# Add the executable
add_executable(tensorRtTest main.cpp)

# Link the TensorRT libraries (add nvonnxparser, nvinfer_plugin, etc. if you use them)
target_link_libraries(tensorRtTest nvinfer)

# Link the CUDA libraries
target_link_libraries(tensorRtTest ${CUDA_LIBRARIES})

CUDA can be located directly with find_package.

TensorRT has no built-in CMake find module, so you need to set its path yourself.
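If you prefer not to rely on hard-coded paths alone, the sketch below lets CMake search for the headers and the nvinfer library itself. This is only one way to do it; the TENSORRT_ROOT variable and the tensorRtTest target name are assumptions taken from the CMakeLists in this post.

```cmake
# Sketch: locate TensorRT manually, since NVIDIA ships no FindTensorRT module.
# TENSORRT_ROOT is assumed to point at the unzipped TensorRT directory.
find_path(TENSORRT_INCLUDE_DIR NvInfer.h
          HINTS ${TENSORRT_ROOT} PATH_SUFFIXES include)
find_library(TENSORRT_NVINFER_LIB nvinfer
             HINTS ${TENSORRT_ROOT} PATH_SUFFIXES lib)

if(NOT TENSORRT_INCLUDE_DIR OR NOT TENSORRT_NVINFER_LIB)
  message(FATAL_ERROR "TensorRT not found - check TENSORRT_ROOT")
endif()

include_directories(${TENSORRT_INCLUDE_DIR})
target_link_libraries(tensorRtTest ${TENSORRT_NVINFER_LIB})
```

With find_path/find_library, a wrong TENSORRT_ROOT fails at configure time with a clear error instead of at link time.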

To verify the setup, add #include "NvInfer.h" to a source file and check whether the IDE can resolve it (jump to the header).

  • For Python, pick the wheel matching your Python interpreter version from the TensorRT package's python directory, then pip install that wheel file.
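After installing the wheel, a quick sanity check is to see whether the `tensorrt` module is importable from the interpreter you installed it into. The snippet below is a minimal sketch; `tensorrt_available` is a helper name chosen here, not part of the TensorRT API.

```python
import importlib.util

def tensorrt_available() -> bool:
    # find_spec returns None when the module is not visible to this interpreter
    return importlib.util.find_spec("tensorrt") is not None

if tensorrt_available():
    import tensorrt as trt
    print("TensorRT version:", trt.__version__)
else:
    print("tensorrt not importable - check that the wheel matches your Python version")
```

If the import fails, the usual cause is a wheel built for a different Python version than the interpreter running it.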


Origin blog.csdn.net/qq_23172985/article/details/131134700