(1) PyTorch study notes

Author: chen_h
WeChat & QQ: 862251340
WeChat public account: coderpai


NumPy or Torch?

Torch claims to be the NumPy of the neural-network community: the tensors it creates can be placed on a GPU to accelerate computation (provided you have a suitable GPU), just as NumPy arrays accelerate computation on the CPU. So if you are building neural networks, data in the form of Torch tensors is the better choice. Torch tensors are much like the tensors in TensorFlow.
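
As a minimal sketch of the GPU point (assuming a CUDA-capable GPU and a CUDA build of PyTorch; without one, the code simply stays on the CPU):

import torch

x = torch.FloatTensor([[1, 2], [3, 4]])
if torch.cuda.is_available():
    x = x.cuda()          # move the tensor onto the GPU
y = x * x                 # runs on the GPU if x was moved there
print(y)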

Of course, we still love NumPy, because we are so used to it. Fortunately, Torch gets along well with NumPy. For example, you can freely convert between a numpy array and a torch tensor:

import torch
import numpy as np

np_data = np.arange(6).reshape((2, 3))
torch_data = torch.from_numpy(np_data)
tensor2array = torch_data.numpy()
print(
    '\nnumpy array:', np_data,          # [[0 1 2], [3 4 5]]
    '\ntorch tensor:', torch_data,      #  0  1  2 \n 3  4  5    [torch.LongTensor of size 2x3]
    '\ntensor to array:', tensor2array, # [[0 1 2], [3 4 5]]
)
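
One detail worth knowing (an added note, but documented PyTorch behavior): torch.from_numpy does not copy the data, it shares memory with the source numpy array, so modifying one also modifies the other:

np_data[0, 0] = 100       # modify the numpy array in place
print(torch_data)         # the torch tensor shows the change as well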

Math in Torch

In fact, torch tensor operations are almost exactly the same as numpy array operations; here we look at them side by side. If you want to learn about other useful torch operators, see the torch documentation linked at the end of these notes.

# abs: absolute value
data = [-1, -2, 1, 2]
tensor = torch.FloatTensor(data)  # convert to a 32-bit float tensor
print(
    '\nabs',
    '\nnumpy: ', np.abs(data),          # [1 2 1 2]
    '\ntorch: ', torch.abs(tensor)      # [1 2 1 2]
)

# sin: trigonometric sine
print(
    '\nsin',
    '\nnumpy: ', np.sin(data),      # [-0.84147098 -0.90929743  0.84147098  0.90929743]
    '\ntorch: ', torch.sin(tensor)  # [-0.8415 -0.9093  0.8415  0.9093]
)

# mean: average
print(
    '\nmean',
    '\nnumpy: ', np.mean(data),         # 0.0
    '\ntorch: ', torch.mean(tensor)     # 0.0
)
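
As one more example of the "other useful operators" mentioned above, here is a matrix-multiplication comparison (an added sketch; torch.mm works on 2-D tensors):

# matrix multiplication
data = [[1, 2], [3, 4]]
tensor = torch.FloatTensor(data)  # convert to a 32-bit float tensor
print(
    '\nmatrix multiplication',
    '\nnumpy: ', np.matmul(data, data),    # [[ 7 10], [15 22]]
    '\ntorch: ', torch.mm(tensor, tensor)  # [[ 7. 10.], [15. 22.]]
)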

Variable

A Variable in Torch is a place that stores values which keep changing, like a basket holding eggs whose number keeps changing. And who are the eggs inside? Naturally, they are Torch Tensors. If you do a computation with a Variable, what comes back is another Variable of the same type.

We define a Variable:

import torch
from torch.autograd import Variable # the Variable module in torch

# first, make the egg (a tensor)
tensor = torch.FloatTensor([[1,2],[3,4]])
# put the egg into the basket; requires_grad controls whether this Variable takes part in error backpropagation, i.e. whether gradients are computed for it
variable = Variable(tensor, requires_grad=True)

print(tensor)
"""
 1  2
 3  4
[torch.FloatTensor of size 2x2]
"""

print(variable)
"""
Variable containing:
 1  2
 3  4
[torch.FloatTensor of size 2x2]
"""

Computing gradients with Variable

Let's compare computing with the tensor and computing with the Variable.

t_out = torch.mean(tensor*tensor)       # x^2
v_out = torch.mean(variable*variable)   # x^2
print(t_out)    # 7.5
print(v_out)    # 7.5

So far we can't see any difference. But always remember: whenever a Variable takes part in a computation, it quietly builds up a huge system behind the scenes, step by step, called the computational graph. What is this graph for? It connects all the computation steps (nodes) together, so that when the error is finally back-propagated, the gradients of all the Variables are computed in one pass; a plain tensor does not have this ability.

v_out = torch.mean(variable*variable) adds one computation step to the graph; when the error is back-propagated, this step receives its share of the gradient. Here is an example:

v_out.backward()    # back-propagate the error from v_out

# It's fine if the next two lines are unclear; just know that a Variable is part of the computational graph and can be used to propagate errors.
# v_out = 1/4 * sum(variable*variable) is the v_out computation step in the graph
# so the gradient with respect to variable is d(v_out)/d(variable) = 1/4 * 2 * variable = variable/2

print(variable.grad)    # the gradient of the initial Variable
'''
 0.5000  1.0000
 1.5000  2.0000
'''
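
To check that the printed gradient really equals variable/2, as the comments above derive (a small added verification):

print(variable.data / 2)    # same values as variable.grad
'''
 0.5000  1.0000
 1.5000  2.0000
'''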

Getting the data inside a Variable

Calling print(variable) directly only outputs the data in Variable form, which in many cases cannot be used as is (for example, when you want to plot it with plt), so we have to convert it into tensor form, and from there into a numpy array.

print(variable)     # Variable form
"""
Variable containing:
 1  2
 3  4
[torch.FloatTensor of size 2x2]
"""

print(variable.data)    # tensor form
"""
 1  2
 3  4
[torch.FloatTensor of size 2x2]
"""

print(variable.data.numpy())    # numpy form
"""
[[ 1.  2.]
 [ 3.  4.]]
"""

Activation functions in Torch

Torch has a lot of activation functions, but we usually use only a few: relu, sigmoid, tanh, softplus. Let's see what each of them looks like.

import torch
import torch.nn.functional as F     # the activation functions all live here
from torch.autograd import Variable

# make some fake data to visualize the curves
x = torch.linspace(-5, 5, 200)  # x data (tensor), shape=(200,)
x = Variable(x)

Next, we generate the output data for each activation function:

x_np = x.data.numpy()   # convert to a numpy array for plotting

# several commonly used activation functions
y_relu = F.relu(x).data.numpy()
y_sigmoid = F.sigmoid(x).data.numpy()
y_tanh = F.tanh(x).data.numpy()
y_softplus = F.softplus(x).data.numpy()
# y_softmax = F.softmax(x)  # softmax is special: it can't be plotted like this, but it outputs probabilities and is used for classification
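
Since the commented-out line says softmax is about probabilities, here is a tiny added sketch (newer PyTorch requires the explicit dim argument used here) showing that softmax outputs sum to 1:

logits = Variable(torch.FloatTensor([1.0, 2.0, 3.0]))
probs = F.softmax(logits, dim=0)   # turn scores into probabilities
print(probs.data.numpy())          # roughly [0.0900 0.2447 0.6652]
print(probs.data.sum())            # sums to 1.0 (up to floating point)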

Then we start plotting; the plotting code is as follows:

[Figure: the relu, sigmoid, tanh and softplus curves produced by the code below]

import matplotlib.pyplot as plt  # Python's visualization module; I have a tutorial at https://morvanzhou.github.io/tutorials/data-manipulation/plt/

plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')

plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')

plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')

plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')

plt.show()

Links:

https://morvanzhou.github.io/tutorials/machine-learning/torch/

https://pytorch.org/docs/stable/torch.html


Source: blog.csdn.net/CoderPai/article/details/104127254