Deploying deep learning projects: understanding the base, runtime, and devel variants of the NVIDIA Docker CUDA image

Recently I had to deploy a deep learning project into a Docker environment, and in the process I ran into several pitfalls with NVIDIA Docker. Although it is a very useful tool, a fresh container is clean and empty, and it takes some time to configure. In this post I will record in detail the differences between the CUDA image variants published under nvidia/cuda on Docker Hub.

CUDA image version overview

base version

Starting with CUDA 9.0, the base image includes the minimal runtime needed to deploy a pre-built CUDA application (libcudart). Users who want to install the required CUDA packages themselves can choose this variant. If you prefer convenience, however, this image is not recommended, because assembling the missing pieces by hand can cause a lot of unnecessary trouble.
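As a quick sketch of how minimal this variant is (the tag below is an assumption; check Docker Hub for the tags currently published), you can confirm that a base image ships libcudart but no compiler:

```shell
# Hedged sketch: a base image contains the CUDA runtime library only.
# libcudart shows up in the linker cache, while nvcc is absent.
docker run --rm nvidia/cuda:12.2.0-base-ubuntu22.04 \
    bash -c 'ldconfig -p | grep libcudart; command -v nvcc || echo "nvcc: not installed"'
```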

runtime version

This variant extends the base image by adding all the shared libraries from the CUDA toolkit. Choose this image if you run a pre-built application that uses multiple CUDA libraries. However, if you try to compile your own project against the CUDA header files, you will likely run into file-not-found errors, because the headers are not included.
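One way to see the missing-headers problem directly (again, the tag is an assumption) is to look for a header inside a runtime container:

```shell
# Hedged sketch: on a runtime image this ls fails, because the headers
# live only in the devel variant; the shared libraries are still present.
docker run --rm nvidia/cuda:12.2.0-runtime-ubuntu22.04 \
    ls /usr/local/cuda/include/cuda_runtime.h
```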

devel version

This variant extends the runtime image by adding the compiler toolchain, debugging tools, header files, and static libraries. It is the recommended choice if you want to compile CUDA applications from source code.

If you want to use the Docker image as a development environment, it is strongly recommended to choose the devel version of the image.
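As a sketch of the development workflow this enables (the image tag and the file name hello.cu are assumptions for illustration), you can compile a local CUDA source file inside a devel container without installing the toolkit on the host:

```shell
# Hedged sketch: devel images ship nvcc, so a mounted source tree
# can be compiled entirely inside the container.
docker run --rm -v "$PWD":/src -w /src nvidia/cuda:12.2.0-devel-ubuntu22.04 \
    nvcc hello.cu -o hello
```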

Image usage suggestions

When selecting a CUDA image version, you can make a reasonable choice based on your project's needs and development environment. Here are some example usages:

  • Run an interactive CUDA session isolated to the first GPU:

    docker run -ti --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=0 nvidia/cuda
    
  • Query the CUDA 7.5 compiler version:

    docker run --rm --runtime=nvidia nvidia/cuda:7.5-devel nvcc --version
    
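The examples above use the legacy --runtime=nvidia flag from nvidia-docker2. With Docker 19.03 or later and the NVIDIA Container Toolkit installed, the --gpus flag serves the same purpose (the image tag below is an assumption; the untagged nvidia/cuda latest tag is no longer maintained, so pinning a tag is safer):

```shell
# Hedged sketch: the same interactive session isolated to the first GPU,
# using the newer --gpus device-selection syntax instead of --runtime=nvidia.
docker run -ti --rm --gpus '"device=0"' nvidia/cuda:12.2.0-base-ubuntu22.04
```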

Summary

Note that the runtime variant does not include the CUDA compiler nvcc. When choosing an image version, weigh the variants carefully against your specific needs to ensure that your Docker environment meets your project's development and deployment requirements.

Through this blog, I hope to help you better understand and choose the NVIDIA Docker CUDA image version suitable for your deep learning project. When configuring the Docker environment, properly selecting the image version is a key step to ensure the smooth progress of the project.

Source: blog.csdn.net/x1131230123/article/details/134978690