Google Colab free GPU server tutorial (mounting Google Drive)

1. Introduction
2. Google Colab features
3. Getting started
3.1 Create a folder on Google Drive
3.2 Create a Colaboratory notebook
3.3 After creation
4. Set the runtime to GPU
5. Run a .py file
5.1 Install the necessary libraries
5.2 Mount Google Drive
5.3 Install Keras
5.4 Hello MNIST!
1. Introduction
Have you been searching everywhere, bruised and battered, for a free GPU server?
Recently Google launched Google Colab (Colaboratory).
Its official description reads:

Colaboratory is a research project, free of charge.

The key point: the most important feature is a free GPU! Free GPU! Free GPU!
It is not clear whether this project will be around permanently,
but for anyone agonizing over whether to spend a lot of money renting a personal GPU server, it is a real boon for research.
After trying it out and gathering some material, I wrote this tutorial to share with everyone.
My level is limited, so mistakes are inevitable; corrections are welcome!

Update 2018.3.22
Emmm, probably because more people are using it now...
Running a DCGAN on Colab turned out to be about five times slower than running it on my own laptop's CPU...
There is no free lunch after all...

2. Google Colab features
Colaboratory is a Google research project created to help spread machine learning education and research. It is a Jupyter notebook environment that requires no setup and runs entirely in the cloud.
Colaboratory notebooks are stored in Google Drive and can be shared just like Google Docs or Sheets. Colaboratory is free of charge.
With Colaboratory you can easily use Keras, TensorFlow, PyTorch and other frameworks to develop deep learning applications.
3. Getting started
Note: using Google services may require a VPN or proxy.

3.1 Create a folder on Google Drive
When you sign in to Google Drive, your account comes with 15 GB of free space. Because Colab relies on Google Drive, you need to create a new folder on your Drive.

Choose New - Folder; the folder name can be whatever you like.

3.2 Create a Colaboratory notebook
Enter the folder you just created and choose New - More.

If Colaboratory does not appear under More, select "Connect more apps", search for Colaboratory, and associate it.


3.3 After creation
Once created, a Jupyter notebook is generated automatically. Doesn't it look familiar?


4. Set the runtime to GPU
Select Edit - Notebook settings.

Set the hardware accelerator to GPU.
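To confirm that the runtime really has a GPU attached, you can run a quick check from a code cell. This is a minimal sketch of my own using TensorFlow (preinstalled on Colab); it is not part of the original tutorial:

# Sanity check: does the runtime see a GPU device?
import tensorflow as tf

device_name = tf.test.gpu_device_name()
if device_name:
    print('GPU found:', device_name)   # typically '/device:GPU:0'
else:
    print('No GPU found - check Edit -> Notebook settings again')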


5. Running a .py file
5.1 Install the necessary libraries
Enter the following code and run it (Ctrl + F9):

!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse
from google.colab import auth
auth.authenticate_user()
from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}
After running, the following prompt appears:


Open the link, choose your Google account, and click Allow; at the end you will get a verification code, which you paste into the corresponding box.

5.2 Mount Google Drive
As above, enter the following commands and run them:

!mkdir -p drive
!google-drive-ocamlfuse -o nonempty drive
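If the mount succeeded, the contents of your Google Drive are now visible under the drive folder. A quick way to check (the test file name below is a hypothetical example of mine, not part of the original tutorial):

!ls drive

# Hypothetical test file, used only to verify write access to the mount
with open('drive/colab_test.txt', 'w') as f:
    f.write('hello from Colab')
!ls drive/colab_test.txt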
5.3 Install Keras
Similarly, enter the command:

!pip install -q keras
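To confirm the install worked, you can import Keras and print its version; this quick check is my own addition, not part of the original post:

import keras
print('Keras version:', keras.__version__)   # should print without errors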
5.4 Hello MNIST!
Paste the code below into the notebook and run it to start your wonderful Google Colab journey.
Source: https://github.com/keras-team/keras/blob/master/examples/mnist_cnn.py

'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K

batch_size = 128
num_classes = 10
epochs = 12

# input image dimensions
img_rows, img_cols = 28, 28

# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()

if K.image_data_format() == 'channels_first':
    x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
    x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)

model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
                 activation='relu',
                 input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy,
              optimizer=keras.optimizers.Adadelta(),
              metrics=['accuracy'])

model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          verbose=1,
          validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Each epoch takes only a little over ten seconds!
Pretty great, isn't it?
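Since Google Drive is already mounted, you can also save the trained model into the drive folder so it survives after the Colab session ends. A minimal sketch of my own (the filename mnist_cnn.h5 is just an example, not from the original post):

# Save the trained weights into the mounted Drive so they persist
model.save('drive/mnist_cnn.h5')

# Later, or in a new session after re-mounting Drive, reload the model:
from keras.models import load_model
restored = load_model('drive/mnist_cnn.h5')
restored.summary()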

References
https://medium.com/deep-learning-turkey/google-colab-free-gpu-tutorial-e113627b9f5d
---------------------
Author: cocoaqin
Source: CSDN
Original: https://blog.csdn.net/cocoaqin/article/details/79184540
Copyright: This is an original article by the blogger; please include a link to the original when reposting.
