[TensorRT] TensorRT environment configuration

This article records the environment configuration process for TensorRT 8.6.

Official Documentation: NVIDIA TensorRT - NVIDIA Docs

Documentation for related TensorRT versions: Documentation Archives :: NVIDIA Deep Learning TensorRT Documentation

1. Download CUDA and cuDNN

CUDA download: CUDA Toolkit Archive | NVIDIA Developer

CUDA installation (I am using CUDA 11.0):


Baidu Netdisk (CUDA 11.0):

Link: https://pan.baidu.com/s/1ZpPkNRDtcbQURIEgpF7t5Q 
Extraction code: dn6q 


1. (Windows version) installation

① CUDA installation and testing

Double-click the downloaded exe file.

Accept the default options throughout the installation, then test it:

nvcc -V

In this way, CUDA 11.0 is installed!

② cuDNN installation and testing

cuDNN download: cuDNN Archive | NVIDIA Developer


Baidu Netdisk (cuDNN 8.0.2):

Link: https://pan.baidu.com/s/13JDfexry0hP1GV0fgnbbBg 
Extraction code: r83z 
 


The download is a compressed archive. After extracting it, copy the bin, include, and lib folders into the directory of your successful CUDA installation.

My path is C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0

Test it:

In this way, cuDNN is installed successfully!

2. Download and installation of TensorRT

Download link: TensorRT SDK | NVIDIA Developer


Baidu Netdisk (TensorRT 8.6):

Link: https://pan.baidu.com/s/1KFkUFNZhNfj0Wo0fKSLbNg 

Extraction code: tec5 


1. Download TensorRT

1. Log in to your account first, then click Download

2. Select TensorRT 8

 

2. Install TensorRT

Extract the downloaded TensorRT archive (I extracted mine to the D: drive). That completes the installation; it is as simple as that!

Next, add the TensorRT lib directory to the system Path environment variable:

D:\TensorRT-8.6.0.12\lib

3. Configure the VS2017 development environment

Note: the project configuration must be set to Release x64!

1. Include directories

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\include
D:\TensorRT-8.6.0.12\include

2. Library directories

D:\TensorRT-8.6.0.12\lib
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0\lib\x64

3. Linker, Additional Dependencies

nvinfer.lib
nvinfer_dispatch.lib
nvinfer_lean.lib
nvinfer_plugin.lib
nvinfer_vc_plugin.lib
nvonnxparser.lib
nvparsers.lib
cublas.lib
cublasLt.lib
cuda.lib
cudadevrt.lib
cudart.lib
cudart_static.lib
cudnn.lib
cudnn_adv_infer.lib
cudnn_adv_infer64_8.lib
cudnn_adv_train.lib
cudnn_adv_train64_8.lib
cudnn_cnn_infer.lib
cudnn_cnn_infer64_8.lib
cudnn_cnn_train.lib
cudnn_cnn_train64_8.lib
cudnn_ops_infer.lib
cudnn_ops_infer64_8.lib
cudnn_ops_train.lib
cudnn_ops_train64_8.lib
cudnn64_8.lib
cufft.lib
cufftw.lib
curand.lib
cusolver.lib
cusolverMg.lib
cusparse.lib
nppc.lib
nppial.lib
nppicc.lib
nppidei.lib
nppif.lib
nppig.lib
nppim.lib
nppist.lib
nppisu.lib
nppitc.lib
npps.lib
nvblas.lib
nvjpeg.lib
nvml.lib
nvrtc.lib
OpenCL.lib

Test code:

#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

using namespace nvinfer1;
using namespace nvonnxparser;

// Minimal logger required by the TensorRT API; filters out INFO messages.
class Logger : public ILogger
{
	void log(Severity severity, const char* msg) noexcept override
	{
		if (severity != Severity::kINFO)
			std::cout << msg << std::endl;
	}
} gLogger;

int main(int argc, char** argv)
{
	// Creating a builder succeeds only if TensorRT is correctly linked and loadable.
	auto builder = createInferBuilder(gLogger);
	builder->getLogger()->log(nvinfer1::ILogger::Severity::kERROR, "Create Builder ...");
	delete builder;  // in TensorRT 8, objects are released with delete
	return 0;
}

In this way, the Windows environment is built and configured!


Origin blog.csdn.net/qq_42108414/article/details/130225372