PyTorch learning: neural networks

Learning website:

http://pytorch123.com/SecondSection/neural_networks/


Neural Networks

Neural networks can be constructed with the torch.nn package, which builds models on top of autograd. An nn.Module contains layers and a forward(input) method that returns an output.
Below is a simple feedforward network, LeNet. A typical training procedure for a neural network involves the following steps:

  • Define a neural network with some trainable parameters
  • Iterate over the whole input dataset
  • Process each input through the network
  • Compute the loss
  • Propagate the gradients back into the network's parameters
  • Update the network's weights, typically with a simple rule: weight = weight - learning_rate * gradient

Writing the above network with torch:

# -*- coding: UTF-8 -*-
"""
Modify: 2019-12-14
"""
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        # super(Net, self) first finds the parent class of Net (nn.Module)
        # and calls its __init__ on this Net instance
        super(Net, self).__init__()
        # convolutional layers
        self.conv1 = nn.Conv2d(1, 6, 5)  # 1 input channel, 6 output channels, 5*5 kernels (6 different 5*5 filters)
        self.conv2 = nn.Conv2d(6, 16, 5) # 6 input channels, 16 output channels, 5*5 kernels
        # fully connected layers: y = wx + b
        self.fc1 = nn.Linear(16*5*5, 120) # 16 feature maps of size 5*5 give 16*5*5 = 400 inputs, mapped to 120 outputs
        self.fc2 = nn.Linear(120, 84) # 120 inputs, 84 outputs
        self.fc3 = nn.Linear(84, 10)  # 84 inputs, 10 outputs

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2)) # 2*2 max pooling, with ReLU as the activation
        x = F.max_pool2d(F.relu(self.conv2(x)), 2) # if the pooling window is square, a single number is enough
        x = x.view(-1, self.num_flat_features(x)) # flatten each sample into a 1D feature vector as input to the fully connected layers
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]     # size of a single sample (all dimensions except the batch dimension)
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
# Output:
# Net(
#   (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
#   (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
#   (fc1): Linear(in_features=400, out_features=120, bias=True)
#   (fc2): Linear(in_features=120, out_features=84, bias=True)
#   (fc3): Linear(in_features=84, out_features=10, bias=True)
# )

# The trainable parameters of a model are returned by net.parameters()
params = list(net.parameters())
print(len(params))
print(params[0].size())
# Output:
# 10
# torch.Size([6, 1, 5, 5])
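
# Each of the five layers (conv1, conv2, fc1, fc2, fc3) contributes a weight
# and a bias tensor, which is why the list has 10 entries. A small sketch for
# listing them by name (named_parameters() is a standard nn.Module method):
for name, p in net.named_parameters():
    print(name, tuple(p.size()))
# conv1.weight (6, 1, 5, 5)
# conv1.bias (6,)
# ... and so on for conv2, fc1, fc2 and fc3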

# Now try a randomly generated 32*32 input. Note: this network expects 32*32 inputs.
# To use this network on the MNIST dataset, the images have to be resized to 32*32.
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
# Output:
# tensor([[ 0.0469,  0.0975,  0.0686,  0.0793,  0.0673,  0.0325, -0.0455, -0.0428,
#          -0.0671, -0.0067]], grad_fn=<AddmmBackward>)

# Zero the gradient buffers of all parameters and backpropagate with random gradients
net.zero_grad()
out.backward(torch.randn(1, 10))
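
Note that torch.nn is designed around mini-batches: nn.Conv2d expects a 4D input of shape (batch, channels, height, width) rather than a single sample. A minimal sketch (using a dummy random image, not part of the original post) of adding a batch dimension with unsqueeze:

single_image = torch.randn(1, 32, 32)   # one image: (channels, height, width)
batched = single_image.unsqueeze(0)     # add a batch dimension -> (1, 1, 32, 32)
print(net(batched).size())              # torch.Size([1, 10])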

The code above defines a neural network, processes an input, and calls backpropagation.

Next, compute the loss and update the network's weights:

  1. Loss function: a loss function takes a pair of inputs, the model output and the target, and computes a value estimating how far the output is from the target. The nn package contains several different loss functions; a simple one is nn.MSELoss, the mean squared error.
output = net(input)
target = torch.randn(10)     # the target values
target = target.view(1, -1)  # reshape the target to the same shape as the output
criterion = nn.MSELoss()     # MSELoss
loss = criterion(output, target)  # compute the loss between output and target
print(loss)
# Output:
#tensor(0.3130, grad_fn=<MseLossBackward>)
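
If we follow loss backwards through its .grad_fn attribute we can see the chain of operations that autograd recorded. A small sketch (the exact class names printed depend on the PyTorch version):

print(loss.grad_fn)                       # e.g. <MseLossBackward object at ...>
print(loss.grad_fn.next_functions[0][0])  # e.g. <AddmmBackward object at ...>, i.e. the fc3 layer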
  2. Backpropagation: to backpropagate the loss, all we need to do is call loss.backward(). The existing gradients must be cleared first, otherwise the new gradients will be accumulated onto the existing ones.
net.zero_grad()
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)

# Output:
# conv1.bias.grad before backward
# tensor([0., 0., 0., 0., 0., 0.])
# conv1.bias.grad after backward
# tensor([-0.0040, -0.0041,  0.0244, -0.0020, -0.0054, -0.0084])
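
To see the accumulation behaviour mentioned above, we can call backward() twice without zeroing the gradients in between; the second set of gradients is added to the first. A toy sketch (retain_graph=True keeps the graph alive for the second backward pass):

net.zero_grad()
loss = criterion(net(input), target)
loss.backward(retain_graph=True)            # first backward pass
g1 = net.conv1.bias.grad.clone()
loss.backward()                             # second pass: gradients accumulate
print(torch.allclose(net.conv1.bias.grad, 2 * g1))  # True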
  3. Updating the network parameters
    The simplest update rule is stochastic gradient descent (SGD): weight = weight - learning_rate * gradient
    A plain Python implementation:
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
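
An equivalent form that avoids touching .data directly wraps the update in torch.no_grad(), the style used in more recent PyTorch examples (a sketch, same update rule):

with torch.no_grad():
    for f in net.parameters():
        f -= learning_rate * f.grad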

torch.optim implements a variety of weight-update rules, such as SGD, Nesterov-SGD, Adam, RMSProp, and so on.
It is used as follows:

import torch.optim as optim
# create an optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# add the following code inside the training loop
optimizer.zero_grad()   # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()    # update the weights
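
Putting the pieces together, a minimal multi-step training loop looks like the following sketch (it reuses the dummy input and target from above; a real script would iterate over an MNIST data loader instead):

for step in range(100):
    optimizer.zero_grad()              # clear old gradients
    output = net(input)                # forward pass
    loss = criterion(output, target)   # compute the loss
    loss.backward()                    # backward pass
    optimizer.step()                   # update the weights
    if step % 20 == 0:
        print(step, loss.item())       # the loss should decrease on this toy data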

Origin: blog.csdn.net/ruotianxia/article/details/103532073