Deploying a Docker Container Interpreter in PyCharm

In the previous article, Deploying a Docker Image Interpreter in PyCharm, we explained in detail how to point PyCharm's python interpreter at a Docker image while keeping the model code and data on the host. That setup does not suit scenarios where the container itself needs to change, such as installing new python packages through PyCharm. This article explains how to point PyCharm at a Docker container (rather than an image), which fits workflows that modify the container frequently.

Experimental environment:

  • PyCharm:2020.1
  • 服务器:Ubuntu 16.04.4 LTS
  • Docker:19.03.4, build 9013bf583a

The configuration consists of the following steps:

  1. On the server, create a new container from the chosen image with a port mapping;
  2. On the server, enter the container and configure ssh;
  3. On your own machine, configure PyCharm.

1 Create a container with a port mapping

```shell
# Map the container's port 22 to port 1122 on the host
~]# docker run -it --name tf2_gpu --gpus all -p 1122:22 tensorflow/tensorflow:latest-gpu /bin/bash
________                               _______________                
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/

WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
root@ed13c547d9df:/#
```

2 Configure the SSH service

```shell
root@ed13c547d9df:/# apt update
# Install openssh-server so the container exposes an SSH service
root@ed13c547d9df:/# apt install openssh-server
# Install a text editor
root@ed13c547d9df:/# apt install vim
# Configure ssh
root@ed13c547d9df:/# vim +/PermitRootLogin /etc/ssh/sshd_config
# Add the line: PermitRootLogin yes
# Save, exit, and restart the ssh service
root@ed13c547d9df:/# /etc/init.d/ssh restart
 * Restarting OpenBSD Secure Shell server sshd                                                     [ OK ]
# Set the root password
root@ed13c547d9df:/# passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
```
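The vim edit above can also be scripted. A minimal sketch in python of the same sshd_config change (the helper name and the append-if-missing behavior are illustrative assumptions, not part of the original walkthrough):

```python
import re

def enable_root_login(config_text):
    """Force 'PermitRootLogin yes' in sshd_config text.

    Replaces an existing (possibly commented-out) PermitRootLogin line,
    or appends the directive if it is absent.
    """
    new_text, count = re.subn(
        r'(?m)^#?\s*PermitRootLogin\b.*$',
        'PermitRootLogin yes',
        config_text,
    )
    if count == 0:
        new_text = new_text.rstrip('\n') + '\nPermitRootLogin yes\n'
    return new_text

# Usage inside the container (write back to /etc/ssh/sshd_config):
# with open('/etc/ssh/sshd_config') as f:
#     text = f.read()
# with open('/etc/ssh/sshd_config', 'w') as f:
#     f.write(enable_root_login(text))
```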
Back on your own machine, test whether you can SSH into the Docker container on the server:

```powershell
C:\Users\leaf>ssh [email protected] -p 1122
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/

WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u $(id -u):$(id -g) args...
root@ed13c547d9df:~#
```

If you see output like the above, the configuration succeeded.
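If the SSH attempt hangs or is refused, it can help to first confirm that the mapped port is reachable at all. A small sketch using only the standard library (the IP 10.16.11.13 and port 1122 are the values used in this walkthrough; substitute your own):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: port_open('10.16.11.13', 1122) should return True once the
# container's SSH service is running and the port mapping is in place.
```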

3 Configure PyCharm

The remaining steps are the same as configuring an interpreter on an ordinary remote server.

(1) Create a new project named "wander".

[image]

(2) Click the "..." button next to "Existing Interpreter" to create a new Python interpreter.

[image]

After entering the password:

[image]

Select the python interpreter to complete the configuration.

[image]

(3) Create a new python file, "connectivity_test.py", with the following contents:

```python
import sys
import tensorflow as tf

print(sys.executable)
print('\n', tf.test.is_gpu_available())
```
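Note that the run log in step (5) flags `is_gpu_available` as deprecated. A variant of the test using the replacement API the warning suggests, `tf.config.list_physical_devices`, guarded so it also runs under an interpreter without TensorFlow:

```python
import sys

print(sys.executable)

try:
    import tensorflow as tf
    # TF 2.x replacement for the deprecated tf.test.is_gpu_available()
    gpus = tf.config.list_physical_devices('GPU')
    print('GPU available:', len(gpus) > 0)
except ImportError:
    # Happens when run outside the container's interpreter
    print('tensorflow is not installed in this interpreter')
```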

(4) Set the run configuration: "Run" -> "Edit Configurations...".

[image]

(5) The output is as follows:

```
ssh://[email protected]:1122/usr/bin/python3 -u /tmp/pycharm_project_917/connectivity_test.py
/usr/bin/python3
WARNING:tensorflow:From /tmp/pycharm_project_917/connectivity_test.py:5: is_gpu_available (from tensorflow.python.framework.test_util) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.config.list_physical_devices('GPU')` instead.
2020-05-22 09:48:58.264438: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-05-22 09:48:58.279597: I tensorflow/core/platform/profile_utils/cpu_utils.cc:102] CPU Frequency: 1699975000 Hz
2020-05-22 09:48:58.280972: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ffa78000b20 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-05-22 09:48:58.281008: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2020-05-22 09:48:58.286453: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-05-22 09:48:58.631613: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3c5c5e0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-05-22 09:48:58.631658: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-05-22 09:48:58.631671: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (1): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-05-22 09:48:58.633921: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties:
pciBusID: 0000:02:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-05-22 09:48:58.636050: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 1 with properties:
pciBusID: 0000:82:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-05-22 09:48:58.636519: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-05-22 09:48:58.640295: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
2020-05-22 09:48:58.643689: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcufft.so.10
2020-05-22 09:48:58.644268: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcurand.so.10
2020-05-22 09:48:58.648118: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusolver.so.10
2020-05-22 09:48:58.650245: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcusparse.so.10
2020-05-22 09:48:58.658191: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudnn.so.7
2020-05-22 09:48:58.665598: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0, 1
2020-05-22 09:48:58.665657: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
2020-05-22 09:48:58.669790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-05-22 09:48:58.669815: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0 1
2020-05-22 09:48:58.669826: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N N
2020-05-22 09:48:58.669834: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 1:   N N
2020-05-22 09:48:58.675256: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/device:GPU:0 with 10371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2020-05-22 09:48:58.677797: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/device:GPU:1 with 10371 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0, compute capability: 6.1)
 True
Process finished with exit code 0
```

The first line of the output shows that the interpreter PyCharm is using is the one inside the Docker container. The final `True` shows that the GPU can be used.


Changelog
2020.05.22 First draft completed


Reprinted from www.cnblogs.com/offduty/p/12938689.html