Use Docker on Linux to quickly build a TensorFlow-GPU development environment

This article will introduce:

  • How to find the required TensorFlow-GPU image
  • Pull the image and view the local image in the Linux terminal
  • Use Docker to build the TensorFlow-GPU environment
  • Configure Jupyter external access mapping
  • Check whether it is a GPU environment

This article assumes a working local Docker environment. If Docker has not been configured yet, set it up first.

1. Download the TensorFlow image

1. Find the required TensorFlow image

Search for NVIDIA NGC and open the NVIDIA GPU-accelerated container catalog, find the desired version of the TensorFlow image, and copy its corresponding pull tag.

2. Pull the image in the Linux terminal
docker pull nvcr.io/nvidia/tensorflow:21.07-tf2-py3
3. View the local image

(1) Run docker info to see where Docker stores its files, and enter that directory.
(2) The downloaded image is stored under that directory.
(3) The name mapping: the number of entries in repositories.json matches the number of images shown by docker images.
After an image is pulled, two records carrying its image ID are added to repositories.json: one matching the image name on the hub (NGC here) and the other matching the local name shown by docker images.
For more detail, see: Detailed analysis of the Docker image storage path
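The shape of repositories.json can be sketched with a made-up entry (the image IDs and digest below are fabricated placeholders, not real values):

```python
import json

# Made-up repositories.json fragment: after one pull, two keys are
# recorded for the image -- the repository:tag name and the registry
# digest name -- and both resolve to the same local image ID.
sample = json.loads("""
{
  "Repositories": {
    "nvcr.io/nvidia/tensorflow": {
      "nvcr.io/nvidia/tensorflow:21.07-tf2-py3": "sha256:0000000000000000",
      "nvcr.io/nvidia/tensorflow@sha256:1111111111111111": "sha256:0000000000000000"
    }
  }
}
""")

records = sample["Repositories"]["nvcr.io/nvidia/tensorflow"]
print("name records:", len(records))                       # two name records
print("distinct image IDs:", len(set(records.values())))   # one image ID
```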

2. Use Docker to build the TensorFlow-GPU environment

For the specific options of running Docker containers, refer to:
the docker run command reference

1. Use Docker to build the Tensorflow-Gpu environment
docker run --gpus all -d -it -p [host port]:[container port] -v [host absolute path]:[container absolute path] --name [container name] -e [env variable key]=[env variable value] nvcr.io/nvidia/tensorflow:21.07-tf2-py3 bash
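As a concrete sketch, the placeholders might be filled in like this (host port 16666, directory /data/workspace, container name tf-gpu, and password changeme are hypothetical example values; the command is echoed rather than executed so it can be reviewed first):

```shell
# Hypothetical example values -- replace with your own.
HOST_PORT=16666           # host port that will expose Jupyter
CONTAINER_PORT=8888       # Jupyter's default port inside the container
HOST_DIR=/data/workspace  # host directory shared with the container
NAME=tf-gpu               # container name

# Compose the full command; echoed instead of run, so docker is not
# required just to inspect the resulting string.
CMD="docker run --gpus all -d -it -p ${HOST_PORT}:${CONTAINER_PORT} -v ${HOST_DIR}:/workspace --name ${NAME} -e NOTEBOOK_PASS=changeme nvcr.io/nvidia/tensorflow:21.07-tf2-py3 bash"
echo "${CMD}"
```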

Note: when choosing a host port to map, first check whether the port is already in use. You can confirm with the following command:

lsof -i:[port]
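For example, checking the hypothetical host port 16666 (with ss shown as a fallback for systems where lsof is not installed):

```shell
# Print any process listening on port 16666; no output means it is free.
lsof -i:16666 2>/dev/null || echo "port 16666 appears to be free"

# ss is a common fallback when lsof is unavailable.
ss -ltn 2>/dev/null | grep ":16666 " || echo "no listener found on 16666 via ss"
```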
2. Enter the container
docker exec -it [container name] bash
3. Configure Jupyter external access mapping
  • When building the TensorFlow-GPU environment with Docker, map Jupyter's port 8888 inside the container to an external host port (the Jupyter port can be customized, but it must match the container-side port of the -p mapping)
  • Modify the jupyter_notebook_config.py file so that the user name, password, etc. can be read from environment variables: vim /root/.jupyter/jupyter_notebook_config.py
    The modified content can follow the code below:
import os
from IPython.lib import passwd  # deprecated in newer IPython; notebook.auth.passwd is the modern equivalent

c.NotebookApp.ip = '0.0.0.0'
c.NotebookApp.port = int(os.getenv('PORT', 8888))
c.NotebookApp.open_browser = False
c.MultiKernelManager.default_kernel_name = 'python3'

# sets a password if NOTEBOOK_PASS is set in the environment
if 'NOTEBOOK_PASS' in os.environ:
    c.NotebookApp.password = passwd(os.environ['NOTEBOOK_PASS'])
    del os.environ['NOTEBOOK_PASS']
else:
    c.NotebookApp.token = ''

if 'NOTEBOOK_USER' in os.environ:
    c.NotebookApp.notebook_dir = '/root/' + os.environ['NOTEBOOK_USER']
    del os.environ['NOTEBOOK_USER']

  • Start the Jupyter service in the background:
nohup jupyter-notebook --allow-root  > /dev/null  2>&1 &
  • You can now access Jupyter directly in a browser (16666 here is the host port chosen in the -p mapping):
http://[ip]:16666/lab
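Putting the Jupyter steps together, a hypothetical start sequence inside the container might look like this (PORT, NOTEBOOK_PASS, and the password changeme are example values read by the jupyter_notebook_config.py shown earlier):

```shell
# Hypothetical values; NOTEBOOK_PASS is the password variable the config
# file reads, and PORT must match the container-side port of the -p mapping.
export PORT=8888
export NOTEBOOK_PASS=changeme
nohup jupyter-notebook --allow-root > /dev/null 2>&1 &
echo "Jupyter starting on container port ${PORT}"
```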

3. Configure and check the environment

1. Configure the environment
apt-get update
apt-get install sudo
2. Check whether it is a GPU environment
ipython
import tensorflow as tf
print(tf.__version__)
print(tf.test.is_gpu_available())	# True means the GPU is available
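Note that tf.test.is_gpu_available() is deprecated in TensorFlow 2; a sketch of the now-idiomatic check (guarded with a try/except so it also degrades gracefully where TensorFlow is not installed):

```python
# Idiomatic TF2 GPU check via tf.config.list_physical_devices.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices('GPU')
    msg = f"TensorFlow {tf.__version__}, GPU available: {len(gpus) > 0}"
except ImportError:
    msg = "TensorFlow is not installed in this environment"
print(msg)
```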


Origin blog.csdn.net/TFATS/article/details/119918502