How to Use a Free GPU to Run Deep Learning Code

  Anyone who has gone at all deep into deep learning knows that deep learning code needs vast amounts of data and (important things should be said three times) a very, very, very large amount of computation. The CPU cannot keep up on its own, so it needs the GPU's help. That does not mean the CPU is weaker than the GPU in general: the CPU is a general-purpose processor, while the GPU was designed for image rendering, which, as we all know, is mostly matrix computation, so the GPU can do far more of that kind of math at once. That is why deep learning programs generally enable the GPU for their computation, and in practice the GPU really does compute much faster than the CPU. But (everything before the "but" is beside the point) we are poor and cannot afford one: a 1080 Ti now goes for around 3500 and a 2080 Ti for around 9000, with the price depending on memory size. Which brings us to the favor Google has done us -- the free GPU in Colaboratory.

An Introduction to Google Colab

  Google Colaboratory is a research tool opened up by Google, aimed mainly at machine learning research and development. The tool is currently free to use, though whether it will stay free forever is not yet determined. Its biggest benefit is that it gives AI developers free use of a GPU! The GPU model is a Tesla K80, and you can easily run examples on it with Keras, TensorFlow, PyTorch and other frameworks.

  Colaboratory is a Jupyter notebook environment that supports Python 2 and Python 3 and also offers TPU and GPU acceleration. It integrates with Google Drive, so users can easily share a project or copy other people's shared projects into their own account.

Steps for Using Colaboratory

1. Log in to Google Drive

https://drive.google.com/drive/my-drive (if you don't have an account, register one)

(1) Right-click and create a new folder to use as our project folder.

2. Create a Colab file

Right-click, open "More", and choose Google Colaboratory (if Colaboratory is not listed, use "Connect more apps" to connect Colaboratory first).

3. Get started

This jumps straight to the Colaboratory interface, which looks a lot like Jupyter Notebook, and Jupyter commands work in Colaboratory as well. It is worth mentioning that Colab can do more than run Python code: put a "!" in front of a command and it becomes a Linux shell command. For example, we can use "!ls" to see which files are in a folder, "!pip install" to install a library, and "!python2 temp.py" to run a .py program.
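For example (the package and script names below are just placeholders, not anything created in this tutorial):

!ls                # list the files in the current folder
!pip install tqdm  # install a library (tqdm is only an example package)
!python temp.py    # run a script (temp.py is assumed to already exist)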

Let's write a bit of code to try it out.

Changing the working directory: in Colab the cd command does not take effect, so switch the working directory with the chdir function instead.

!pwd # show the working path with the pwd command
# /content
!ls # see which files are under the content folder
# sample_data
!ls "drive/My Drive"
# TensorFlow (this is the folder we created earlier)

# change the working directory
import os
os.chdir("/content/drive/My Drive/TensorFlow")
os.getcwd()
# '/content/drive/My Drive/TensorFlow'

Command to restart Colab: !kill -9 -1

(3) Choosing the runtime configuration

  You are surely wondering whether the program we just ran actually used the GPU. It did not. To run a program on the GPU we still need to configure the environment.

  Click "Edit" in the toolbar and choose "Notebook settings".

Under "Runtime type" we can choose Python 2 or Python 3, and under "Hardware accelerator" we can choose GPU or TPU (covered later), or None to use neither.
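As a quick, optional sanity check, you can confirm from inside the notebook which Python version the selected runtime is actually running:

import sys
print(sys.version)  # Python version of the selected runtime type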

Loading Data

Loading data from the local machine

Uploading data from the local machine

files.upload returns a dictionary of the uploaded files. The keys of this dictionary are the file names and the values are the uploaded data.

from google.colab import files

uploaded = files.upload()
for fn in uploaded.keys():
  print('User uploaded file "{name}" with {length} bytes'.format(
      name=fn, length=len(uploaded[fn])))

After we run this code, it prompts us to choose a local file; once we click upload, the file can be read.
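Because uploaded maps each file name to its raw bytes, the contents can be used directly in memory. A minimal sketch, assuming the uploaded file happens to be a CSV (pandas and io are just one way to handle it):

import io
import pandas as pd

for fn in uploaded.keys():
    # parse the uploaded CSV straight from the in-memory bytes
    df = pd.read_csv(io.BytesIO(uploaded[fn]))
    print(df.head())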

Downloading files to the local machine

from google.colab import files

files.download('./example.txt')        # download a file
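Note that ./example.txt has to exist in the Colab file system before it can be downloaded; a minimal end-to-end sketch (the file name and contents are just placeholders):

from google.colab import files

# create a small file to download (name and contents are only an example)
with open('example.txt', 'w') as f:
    f.write('some test content')

files.download('example.txt')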

Loading data from Google Drive

Mount Google Drive in the runtime using an authorization code

from google.colab import drive
drive.mount('/content/gdrive')

Running the above code in Colab produces a link. Click the link, copy the key shown there, and paste it into Colab; that connects Colab to Google Drive. After connecting, switch the working path and you can read data from Google Drive directly.
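For example, once the drive is mounted it appears under /content/gdrive/My Drive, so switching into the project folder created earlier and listing its files looks like this (the TensorFlow folder is just the example from above):

import os

# the mounted drive lives under /content/gdrive/My Drive
os.chdir('/content/gdrive/My Drive/TensorFlow')
print(os.listdir('.'))  # files stored in the project folder on Google Drive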

Adding a form to Google Colab

So that you don't have to change hyperparameters in the code every time, you can simply add a form to Google Colab.

Once you click it, two panes appear, one on the left and one on the right; in the left pane we enter

# @title String fields

text = 'value' #@param {type:"string"}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
text_and_dropdown = 'value' #@param ["option 1", "option 2", "option 3"] {allow-input: true}

print(text)
print(dropdown)
print(text_and_dropdown)

Double-click the right pane to hide the code.
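Since the point of forms is exposing hyperparameters, number, slider and boolean fields are also useful; a minimal sketch (the parameter names are made up for illustration):

learning_rate = 0.001  #@param {type:"number"}
epochs = 10            #@param {type:"slider", min:1, max:50, step:1}
use_dropout = True     #@param {type:"boolean"}

print(learning_rate, epochs, use_dropout)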

The GPU in Colab

First we need Colab to connect to a GPU: menu bar --> Edit --> Notebook settings --> select GPU

Next we confirm that TensorFlow can connect to the GPU

import tensorflow as tf

device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
  raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
# Found GPU at: /device:GPU:0
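You can also check which GPU Colab actually assigned (for example the Tesla K80 mentioned earlier) by querying the driver directly:

!nvidia-smi  # prints the GPU model, memory and current utilization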

We can run the following code in Colab to compare the speed of the GPU and the CPU

import tensorflow as tf
import timeit

config = tf.ConfigProto()
config.gpu_options.allow_growth = True

with tf.device('/cpu:0'):
  random_image_cpu = tf.random_normal((100, 100, 100, 3))
  net_cpu = tf.layers.conv2d(random_image_cpu, 32, 7)
  net_cpu = tf.reduce_sum(net_cpu)

with tf.device('/device:GPU:0'):
  random_image_gpu = tf.random_normal((100, 100, 100, 3))
  net_gpu = tf.layers.conv2d(random_image_gpu, 32, 7)
  net_gpu = tf.reduce_sum(net_gpu)

sess = tf.Session(config=config)

# make sure TF can detect the GPU
try:
  sess.run(tf.global_variables_initializer())
except tf.errors.InvalidArgumentError:
  print(
      '\n\nThis error most likely means that this notebook is not configured to use a GPU. '
      'Change this in Notebook Settings via the command palette (CMD/CTRL-SHIFT-P) or the Edit menu.\n\n')
  raise

def cpu():
  sess.run(net_cpu)
  
def gpu():
  sess.run(net_gpu)
  
# run once as a test
cpu()
gpu()

# run the ops multiple times
print('Time (s) to convolve a 32x7x7x3 filter over random 100x100x100x3 images '
    '(batch x height x width x channels). Sum of 10 runs.')
print('CPU (s):')
cpu_time = timeit.timeit('cpu()', number=10, setup="from __main__ import cpu")
print(cpu_time)
print('GPU (s):')
gpu_time = timeit.timeit('gpu()', number=10, setup="from __main__ import gpu")
print(gpu_time)
print('GPU speedup over CPU: {}x'.format(int(cpu_time/gpu_time)))

sess.close()
# CPU (s):
# 3.593296914000007
# GPU (s):
# 0.1831514239999592
# GPU speedup over CPU: 19x

The TPU in Colab

First we need Colab to connect to a TPU: menu bar --> Edit --> Notebook settings --> select TPU

Next we confirm that TensorFlow can connect to the TPU

import os
import pprint
import tensorflow as tf

if 'COLAB_TPU_ADDR' not in os.environ:
  print('You are not connected to a TPU; please complete the steps above')
else:
  tpu_address = 'grpc://' + os.environ['COLAB_TPU_ADDR']
  print ('TPU address is', tpu_address)     
  # TPU address is grpc://10.97.206.146:8470

  with tf.Session(tpu_address) as session:
    devices = session.list_devices()
    
  print('TPU devices:')
  pprint.pprint(devices)

A simple computation on the TPU

import numpy as np

def add_op(x, y):
  return x + y
  
x = tf.placeholder(tf.float32, [10,])
y = tf.placeholder(tf.float32, [10,])
tpu_ops = tf.contrib.tpu.rewrite(add_op, [x, y])
  
session = tf.Session(tpu_address)
try:
  print('Initializing...')
  session.run(tf.contrib.tpu.initialize_system())
  print('Running ops')
  print(session.run(tpu_ops, {x: np.arange(10), y: np.arange(10)}))
  # [array([ 0.,  2.,  4.,  6.,  8., 10., 12., 14., 16., 18.], dtype=float32)]
finally:
  # For now, the TPU system must be shut down separately from closing the session.
  session.run(tf.contrib.tpu.shutdown_system())
  session.close()

Running TensorBoard in Colab

To run TensorBoard in Google Colab, run the following code

!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip

# create the directory for the TensorBoard logs
import os
log_dir = 'tb_logs'
if not os.path.exists(log_dir):
  os.makedirs(log_dir)

# start the ngrok service and bind it to port 6006 (TensorBoard)
get_ipython().system_raw('tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'.format(log_dir))
get_ipython().system_raw('./ngrok http 6006 &')

# generate the public URL; click it to open TensorBoard
!curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"

You can track your TensorBoard logs with the ngrok.io URL that gets created; you will find the URL at the end of the output. Note that your TensorBoard logs are saved to the tb_logs directory; of course, you can change the directory name.

After that, we can watch TensorBoard do its work! Once you run the code below, you can follow the TensorBoard logs through the ngrok URL.

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
from keras.callbacks import TensorBoard

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])


tbCallBack = TensorBoard(log_dir=log_dir,  # the 'tb_logs' directory created above
                         histogram_freq=1,
                         write_graph=True,
                         write_grads=True,
                         batch_size=batch_size,
                         write_images=True)

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test),
          callbacks=[tbCallBack])
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

