What are GPU, CUDA and cuDNN, and what is the relationship between them?

1. GPU

A GPU (graphics processing unit) is a processor designed for parallel data processing. It excels at rendering graphics and video, and is now widely used well beyond graphics.

GPUs evolved to complement CPUs, which are designed to handle general-purpose tasks and have more complex control units. While a CPU can improve performance through architectural innovation, higher clock speeds, and more cores, the GPU was purpose-built to accelerate graphics workloads and is mainly used for large-scale computation with relatively simple control logic. For such tasks its throughput far exceeds that of a CPU.

GPU and graphics card are often used to mean the same thing, but there is a difference between the two. A GPU is to a graphics card what a CPU is to a motherboard: the graphics card is an expansion board that integrates the GPU, along with many other components that let the GPU run and connect to the rest of the system. GPUs come in two types, integrated and discrete. An integrated GPU is embedded alongside the CPU, while a discrete GPU is a separate chip mounted on its own circuit board.

GPUs were originally designed to accelerate graphics rendering, but over time they have become more flexible and programmable, enabling more interesting visual effects and more realistic scenes. Developers have also begun exploiting the GPU's raw power to dramatically accelerate computation in areas such as deep learning.

2. CUDA

CUDA is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on GPUs. With CUDA, developers can harness the GPU's performance to significantly accelerate computing applications. In a GPU-accelerated application, the serial part of the workload runs on the CPU, which is optimized for single-thread performance, while the compute-intensive part runs in parallel across thousands of GPU cores.

In other words, CUDA is the interface through which the GPU's parallel computing capability can be used efficiently and flexibly to complete large-scale data computation tasks.
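As a minimal sketch (not from the original article) of this division of labor, the following CUDA program runs the serial setup on the CPU (the host) and offloads the data-parallel part, an element-wise vector addition, to the GPU (the device):

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Device code: each GPU thread handles one element in parallel.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host (CPU) part: allocate and initialize the data serially.
    float *h_a = (float*)malloc(bytes), *h_b = (float*)malloc(bytes),
          *h_c = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Copy inputs to the GPU, launch the kernel, copy the result back.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes); cudaMalloc(&d_b, bytes); cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", h_c[0]);  // expect 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```

Compiled with `nvcc`, the kernel launch `<<<blocks, threads>>>` spreads the one million additions across GPU threads, which is exactly the "compute-intensive portion running in parallel on thousands of GPU cores" described above.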

3. cuDNN

cuDNN (CUDA Deep Neural Network library) is a GPU-accelerated library of deep neural network primitives developed by NVIDIA. It provides highly optimized implementations of standard routines such as forward and backward convolution, pooling, normalization, and activation layers.

With cuDNN, researchers and developers get high-performance GPU acceleration while focusing on training neural networks and building software applications, instead of spending time on low-level GPU performance tuning; it also spares every user from reimplementing these primitives in raw CUDA. cuDNN is not strictly required to train a model on a GPU, but in practice this acceleration library is almost always used. cuDNN accelerates widely used deep learning frameworks, including Caffe2, Keras, PaddlePaddle, PyTorch, and TensorFlow.
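To make "highly optimized standard routines" concrete, the sketch below calls cuDNN's C API directly to apply a ReLU activation to a tiny tensor. The tensor shape and values are invented for the example; in practice a framework issues calls like these on your behalf:

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <cudnn.h>

int main() {
    cudnnHandle_t handle;
    cudnnCreate(&handle);

    // Describe a tiny NCHW float tensor (1x1x1x4) -- the shape is illustrative.
    cudnnTensorDescriptor_t desc;
    cudnnCreateTensorDescriptor(&desc);
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NCHW, CUDNN_DATA_FLOAT,
                               1, 1, 1, 4);

    // Describe the activation routine: ReLU.
    cudnnActivationDescriptor_t act;
    cudnnCreateActivationDescriptor(&act);
    cudnnSetActivationDescriptor(act, CUDNN_ACTIVATION_RELU,
                                 CUDNN_NOT_PROPAGATE_NAN, 0.0);

    float h_x[4] = {-2.0f, -0.5f, 0.5f, 2.0f}, h_y[4];
    float *d_x, *d_y;
    cudaMalloc(&d_x, sizeof(h_x));
    cudaMalloc(&d_y, sizeof(h_y));
    cudaMemcpy(d_x, h_x, sizeof(h_x), cudaMemcpyHostToDevice);

    // y = ReLU(x), computed by cuDNN's optimized kernel.
    float alpha = 1.0f, beta = 0.0f;
    cudnnActivationForward(handle, act, &alpha, desc, d_x, &beta, desc, d_y);

    cudaMemcpy(h_y, d_y, sizeof(h_y), cudaMemcpyDeviceToHost);
    for (int i = 0; i < 4; ++i) printf("%.1f ", h_y[i]);  // 0.0 0.0 0.5 2.0
    printf("\n");

    cudaFree(d_x); cudaFree(d_y);
    cudnnDestroyActivationDescriptor(act);
    cudnnDestroyTensorDescriptor(desc);
    cudnnDestroy(handle);
    return 0;
}
```

Note the pattern: describe the tensors and the operation with descriptors, then invoke one library routine. Frameworks chain thousands of such calls without the user ever writing CUDA kernels.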

After understanding what GPU, CUDA, and cuDNN are, the relationship between the three becomes clear.
When developing deep learning applications on a GPU, we write code with a deep learning framework such as PyTorch; the framework calls into the cuDNN deep neural network library, which in turn uses the CUDA parallel computing platform to run the computation on a high-performance GPU. In short: the deep learning framework depends on cuDNN -> cuDNN depends on CUDA -> CUDA depends on the GPU.
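This layering can be observed directly. The probe below (a sketch; it must be linked against both the CUDA runtime and cuDNN) queries each level of the stack, from the physical GPU up through CUDA to cuDNN:

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <cudnn.h>

int main() {
    // Bottom layer: the physical GPU.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("GPU:          %s\n", prop.name);

    // Middle layer: the CUDA platform (runtime version, e.g. 12020 for 12.2).
    int rt = 0;
    cudaRuntimeGetVersion(&rt);
    printf("CUDA runtime: %d\n", rt);

    // Top layer: the cuDNN library built on top of CUDA.
    printf("cuDNN:        %zu\n", cudnnGetVersion());
    return 0;
}
```

A framework such as PyTorch sits above all three layers, which is why installing it requires mutually compatible versions of the GPU driver, CUDA, and cuDNN.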

Origin blog.csdn.net/eastking0530/article/details/126571543