PyTorch learning notes

pytorch

GitHub link for the code in this article

Background

For various reasons I need to use PyTorch as a basis for further work, so I plan to spend about a week on a quick start: getting familiar with building neural networks in PyTorch, alongside literature reading and writing, project applications, and research in other directions. I will spend roughly two hours a day learning and simply record it here.
Tutorials used in this article:
the official PyTorch tutorial (Chinese version)
the official PyTorch documentation

Equipment: Dell Inspiron 7580, Intel 8265U CPU, MX150 discrete graphics; OS: Windows 10; main editor: VS Code

A Linux virtual machine was not chosen because, on the one hand, I am worried about virtual machine performance with the VM sitting on a mechanical hard disk (a tragic history of having to choose between a 512GB SSD and a 128GB SSD + 1TB HDD), and on the other hand the tutorial targets Windows.

Day 1: Installing PyTorch

Following the tutorial, install Anaconda, then go to the official PyTorch website to get the install command, open cmd, and copy-paste it in one go. This process takes a while, so please be patient.
Next, create a new Python file in VS Code and write the following code to check whether the installation succeeded:

    from __future__ import print_function
    import torch
    x = torch.rand(5,3)
    print(x)

Press F5, and the result is as follows:
At this point, the Anaconda and PyTorch installation is complete.
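If you also want to check whether PyTorch can see a GPU (this machine has an MX150), a quick extra check is the snippet below; note that it only prints True if a CUDA-enabled build was installed, which the default CPU-only install command does not include.

    import torch
    # True only when a CUDA-enabled PyTorch build is installed and a usable GPU is detected
    print(torch.cuda.is_available())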
Of course, don't forget to put the code on GitHub as you go; it's a good habit to form.

Day 2: Tensors and autograd

Today I mainly worked through tensor basics and the autograd package.

    # Getting started: tensors
    from __future__ import print_function
    import torch
    x = torch.empty(5,3)                      # uninitialized 5x3 tensor
    print(x)
    x = torch.rand(5,3)                       # uniform random values in [0,1)
    print(x)
    x = torch.zeros(5,3,dtype=torch.long)     # zeros with an explicit dtype
    print(x)
    x = torch.tensor([5.5,3])                 # construct a tensor directly from data
    print(x)
    x = x.new_ones(5,3,dtype=torch.double)    # new_ones reuses properties of x unless overridden
    print(x)
    x = torch.randn_like(x,dtype=float)       # same size as x, different dtype
    print(x)
    print(x.size())                           # torch.Size is in fact a tuple
    #########add###########
    y = torch.rand(5,3)
    # 1: operator syntax
    print(x+y)
    # 2: torch.add
    print(torch.add(x,y))
    # 3: providing an output tensor
    result = torch.empty(5,3)
    torch.add(x,y,out=result)
    print(result)
    # 4: in-place addition, adds x to y
    y.add_(x)
    print(y)
    # indexing: all rows, column 1
    print(x[:,1])
    # resizing / reshaping with view()
    x = torch.randn(4,4)
    y = x.view(16)
    z = x.view(-1,8) #the size -1 is inferred from other dimensions
    print(x.size(),y.size(),z.size())
    print(x)
    print(y)
    print(z)
    #use .item() to get the value
    x = torch.randn(1)
    print(x)
    print(x.item())
  • The autograd package: this involves some concepts around gradients of tensors; refer to this article for background.
# The autograd package provides automatic differentiation for all operations on Tensors.
# It is a define-by-run framework: your backward pass is defined by how your code runs, and every iteration can be different.
import torch
# Create a tensor x and set requires_grad=True to track computation on it
x = torch.ones(2,2,requires_grad=True)
print(x)
# Do an operation on the tensor
y = x+2
print(y)
# y was created as the result of an operation, so it has a grad_fn
print(y.grad_fn)
# Do more operations on y
z = y*y*3
out = z.mean()
print(z,out)
# .requires_grad_( ... ) changes a tensor's requires_grad flag in place; it defaults to False if not supplied
a = torch.randn(2,2)
a = ((a*3)/(a-1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a*a).sum()
print(b.grad_fn)

############# Gradients ##########
out.backward()
print(x.grad)
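# Why the printed value is 4.5: out = (1/4) * sum_i 3*(x_i+2)^2, so d(out)/dx_i = 1.5*(x_i+2),
# which equals 4.5 at x_i = 1; x.grad is therefore a 2x2 tensor filled with 4.5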
########### Vector-Jacobian product ########
x = torch.randn(3,requires_grad=True)
y = x*2
while y.data.norm() < 1000:
    y = y*2
print(y)
# y is no longer a scalar, so torch.autograd cannot compute the full Jacobian directly,
# but if we only want the vector-Jacobian product, we simply pass the vector to backward as an argument.
v = torch.tensor([0.1,1.0,0.0001],dtype=torch.float)
y.backward(v)
print(x.grad)
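# backward(v) computes a vector-Jacobian product: x.grad now holds J^T v, where J is the
# Jacobian of y with respect to x (here y = 2^k * x elementwise, so x.grad = v * 2^k)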
# You can stop autograd from tracking history on tensors with .requires_grad=True by wrapping the code in a with torch.no_grad(): block
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
    print((x ** 2).requires_grad)

Day 3: Neural networks

Today I learned how to build neural networks in PyTorch.

A typical training procedure for a neural network includes the following steps (a compact sketch combining them follows this list):
1. Define a neural network with some learnable parameters (weights)
2. Iterate over a dataset of inputs
3. Process the input through the network
4. Compute the loss (how far the output is from the correct answer)
5. Propagate the gradients back into the network's parameters
6. Update the weights of the network, typically with a simple rule: weight = weight - learning_rate * gradient
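
To see how these six steps fit together before walking through them one by one, here is a minimal self-contained sketch; the tiny model and random data are invented just for illustration and are not the LeNet-style network defined below:

    # Toy end-to-end sketch of the six training steps (illustration only)
    import torch
    import torch.nn as nn
    import torch.optim as optim

    model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))  # 1. define the network
    criterion = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01)

    for step in range(3):                    # 2. iterate over inputs
        inputs = torch.randn(4, 32)          #    random data stands in for a real dataset
        targets = torch.randn(4, 10)
        optimizer.zero_grad()                #    clear old gradients (they accumulate otherwise)
        outputs = model(inputs)              # 3. process the input through the network
        loss = criterion(outputs, targets)   # 4. compute the loss
        loss.backward()                      # 5. backpropagate gradients to the parameters
        optimizer.step()                     # 6. update the weights
        print(step, loss.item())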

Defining the network:

## A neural network in PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
	def __init__(self):
		super(Net,self).__init__()
		# 1 input image channel, 6 output channels, 5*5 square convolution
		# kernel
		self.conv1 = nn.Conv2d(1,6,5)
		self.conv2 = nn.Conv2d(6,16,5)
		# an affine operation: y = Wx+b
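		# 16*5*5: a 32x32 input becomes 28x28 after conv1 (5x5), 14x14 after 2x2 pooling,
		# 10x10 after conv2 (5x5), and 5x5 after pooling again, with 16 channels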
		self.fc1 = nn.Linear(16*5*5,120)
		self.fc2 = nn.Linear(120,84)
		self.fc3 = nn.Linear(84,10)

	def forward(self,x):
		# Max pooling over a (2,2) window
		x = F.max_pool2d(F.relu(self.conv1(x)),(2,2))
		# If the size is a square you can only specify a single number
		x = F.max_pool2d(F.relu(self.conv2(x)),2)
		x = x.view(-1,self.num_flat_features(x))
		x = F.relu(self.fc1(x))
		x = F.relu(self.fc2(x))
		x = self.fc3(x)
		return x

	def num_flat_features(self,x):
		size = x.size()[1:]  # all dimensions except the batch dimension
		num_features = 1
		for s in size:
			num_features *= s
		return num_features

net = Net()
print(net)

Iterating over inputs and processing them through the network

# The learnable parameters of a model are returned by net.parameters()
params = list(net.parameters())
print(len(params))
print(params[0].size()) #conv1's .weight
# Try a random 32x32 input
input = torch.randn(1,1,32,32)
out = net(input)
print(out)
# Zero the gradient buffers of all parameters and backprop with random gradients
net.zero_grad()
out.backward(torch.randn(1,10))
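# (the random gradient passed to backward must match the output's shape, here (1,10))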

Loss function

## Loss function
output = net(input)
target = torch.randn(10) #a dummy target, for example
target = target.view(1,-1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output,target)
print(loss)

Backpropagation

# Demonstrate backpropagation
print(loss.grad_fn) #MSELoss
print(loss.grad_fn.next_functions[0][0]) #Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) #ReLU
# Backward pass
# To backpropagate the loss, all we need to do is call loss.backward()
net.zero_grad() # zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)

Updating the network parameters

# Updating the network's parameters:
# The simplest update rule is stochastic gradient descent (SGD):
# weight = weight - learning_rate * gradient
# We can implement this rule in plain Python:
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)

However, when using neural networks you may want to use different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, and so on. To make this possible, there is a small package, torch.optim, that implements all of these methods. Using it is very simple:

import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)
# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
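# (gradients accumulate by default, so the buffers must be cleared every iteration)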
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update

Day 4

About BP neural networks.
For some reason, I have decided to switch to TensorFlow; this PyTorch exercise was a first dip into the AI field.

Source: blog.csdn.net/qq_43534805/article/details/105184930