Getting started with PyTorch in one article

  • This tutorial assumes you have some basic knowledge of neural networks.

0. Installing PyTorch
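
  • PyTorch can usually be installed with pip (for example: pip install torch) or with conda; the exact command depends on your platform and CUDA version, so check the install selector on pytorch.org for the one that matches your setup.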

1. A brief introduction to PyTorch

PyTorch is a neural network framework that helps us build neural networks quickly.

2. PyTorch and NumPy

  • PyTorch and NumPy offer very similar functionality.

In NumPy, the core data structure is the multi-dimensional array, the ndarray.
In PyTorch, the equivalent of NumPy's ndarray is the Tensor. Let's first take a look at PyTorch's basic usage!

# import the torch module (this is our PyTorch)
import torch
import numpy as np
# create a Tensor
torch_data = torch.Tensor([1,2,3])
# Torch ---> array
np_data = torch_data.numpy()
# array ---> Torch
torch_data = torch.from_numpy(np_data)
# absolute value
torch.abs(torch_data)
# create a 2x2 matrix of ones
torch.ones((2,2))
  • For this part of PyTorch's syntax, it is essentially the same as NumPy; a small side-by-side example follows.
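  • As one more side-by-side comparison (this snippet is mine, not from the original article), matrix multiplication looks almost identical in the two libraries:
import numpy as np
import torch

data = [[1,2],[3,4]]
np_arr = np.array(data)
t_data = torch.Tensor(data)

# matrix multiplication in NumPy and in torch
print(np.matmul(np_arr, np_arr))
print(torch.mm(t_data, t_data))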

3. Activation functions

  • Without activation functions, our neural networks can only handle linear problems; to handle nonlinear problems well, we need activation functions.
  • Importing from torch.nn.functional:
    most activation functions can be imported from torch.nn.functional, but a small number of frequently used ones (such as relu) are called from torch itself.
import torch
import torch.nn.functional as F

x = torch.linspace(-5, 5, 10)   # some sample values

# how to call the activation functions
torch.relu(x)   # relu is called from torch
F.sigmoid(x)
F.tanh(x)
F.softplus(x)
  • Below are plots of these activation functions (the original figure is omitted); a small sketch to reproduce them follows.
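  • A minimal plotting sketch (assuming matplotlib is installed; this code is not from the original article):
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt

x = torch.linspace(-5, 5, 200)
activations = {
    "relu": torch.relu(x),
    "sigmoid": torch.sigmoid(x),
    "tanh": torch.tanh(x),
    "softplus": F.softplus(x),
}
for name, y in activations.items():
    plt.plot(x.numpy(), y.numpy(), label=name)
plt.legend()
plt.show()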

4. Back Propagation

  • Backpropagation is an essential mechanism in neural networks: it lets us keep updating the weights of our neurons so that the network is continuously optimized.
import torch

x = torch.Tensor([[1,2],[3,4]])
# enable gradient tracking so backpropagation can run
x.requires_grad=True

y = torch.mean(x*x)  # mean of x^2
# backpropagate the error
y.backward()
# inspect the gradient
print(x.grad)
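  • Sanity check on the numbers: here y = mean(x*x) = (1/4)·sum(x_i^2), so dy/dx_i = x_i/2, and the printed gradient should be [[0.5, 1.0], [1.5, 2.0]].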

5. Building a simple neural network

# import the modules we need
import torch
import torch.nn.functional as F
import numpy as np
  • Define our neural network
# define the neural network
class Net(torch.nn.Module):
    # n_feature: number of input neurons, n_hidden: number of neurons in the hidden layer,
    # n_output: number of output neurons
    def __init__(self,n_feature,n_hidden,n_output):
        super(Net,self).__init__()  # required step: call the parent class constructor
        # define our hidden layer
        self.hidden = torch.nn.Linear(n_feature,n_hidden)
        # define our output layer
        self.predict = torch.nn.Linear(n_hidden,n_output)

    def forward(self,x):
        # apply the activation function between the hidden layer and the output layer
        x = F.relu(self.hidden(x))
        x = self.predict(x)
        return x

# 1 input neuron, 10 hidden neurons, 1 output neuron
net = Net(1,10,1)
# print the network
print(net)

Out
Net(
(hidden): Linear(in_features=1, out_features=10, bias=True)
(predict): Linear(in_features=10, out_features=1, bias=True)
)

  • A quicker way to define the same neural network
net2 = torch.nn.Sequential(
        torch.nn.Linear(1,10),
        torch.nn.ReLU(),
        torch.nn.Linear(10,1)
)
print(net2) # same structure as the first network
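  • As a quick sanity check (this snippet is mine, not from the original article), pass a batch of dummy inputs through both networks and compare the output shapes:
import torch

x = torch.unsqueeze(torch.linspace(-1, 1, 100), dim=1)  # shape (100, 1): 100 samples, 1 feature
print(net(x).shape)   # torch.Size([100, 1])
print(net2(x).shape)  # torch.Size([100, 1])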

6. Loss functions and optimizers

  • The loss function (by computing the loss, it gives us a signal with which to optimize the neural network).
  • So far our loss function has been the mean squared error; for classification, cross-entropy is the better choice (see below).
loss_func = torch.nn.MSELoss()
# for classification tasks, cross-entropy works better as the loss function
loss_func = torch.nn.CrossEntropyLoss()
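  • A tiny illustration of the two losses (the dummy tensors here are mine, not from the original article):
import torch

pred = torch.tensor([[0.2, 0.8]])          # one sample, two outputs (raw scores)
target_reg = torch.tensor([[0.0, 1.0]])    # regression-style target
print(torch.nn.MSELoss()(pred, target_reg))
target_cls = torch.tensor([1])             # classification target: a class index
print(torch.nn.CrossEntropyLoss()(pred, target_cls))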
  • An optimizer lets us optimize the neural network faster and better.
  • Different optimizers have different effects; a small comparison sketch follows the table below.
# stochastic gradient descent (SGD) optimizer
# Net is the network class from the previous section; net.parameters() are the network's parameters
net_SGD = Net(1,10,1)
# the arguments are the network's parameters and the learning rate,
# which controls how large each parameter update is
optimizer = torch.optim.SGD(net_SGD.parameters(),lr=0.01)
                    Large learning rate                     Small learning rate
Convergence speed   fast                                    slow
Drawback            tends to oscillate around the optimum   prone to overfitting
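  • A small sketch of what "different optimizers" looks like in code, reusing the Net class from section 5 (the momentum and Adam settings are illustrative choices of mine, not from the original article):
import torch

# each optimizer gets its own copy of the network so they can be compared fairly
net_SGD      = Net(1,10,1)
net_Momentum = Net(1,10,1)
net_Adam     = Net(1,10,1)

opt_SGD      = torch.optim.SGD(net_SGD.parameters(), lr=0.01)
opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=0.01, momentum=0.8)
opt_Adam     = torch.optim.Adam(net_Adam.parameters(), lr=0.01, betas=(0.9, 0.99))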

7. Hands-on: a simple classification task

import torch 
import torch.nn.functional as F
import numpy as np

###### create some fake data ######
n_data = torch.ones(100,2)
# first dataset
x0 = torch.normal(2*n_data,1)
y0 = torch.zeros(100)
# second dataset
x1 = torch.normal(-2*n_data,1)
y1 = torch.ones(100)
# merge the datasets --> concatenate and convert the dtypes
x = torch.cat((x0,x1),0).type(torch.FloatTensor)     # 32-bit float
y = torch.cat((y0,y1)).type(torch.LongTensor)        # 64-bit integer
  • Let's take a look at our data.
  • Each sample has only two features (x, y).
  • The label is 0 or 1: the red points are the first dataset, the blue points are the second dataset (the original scatter plot is omitted; a sketch to reproduce it follows).
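  • A minimal plotting sketch (assuming matplotlib is installed; the colormap choice is mine, not from the original article):
import matplotlib.pyplot as plt

# color each point by its label to visualize the two datasets
plt.scatter(x[:,0].numpy(), x[:,1].numpy(), c=y.numpy(), cmap='coolwarm', s=20)
plt.show()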
###### define our neural network ######
class Net(torch.nn.Module):
    # n_feature: number of input neurons, n_hidden: number of hidden neurons, n_output: number of output neurons
    def __init__(self,n_feature,n_hidden,n_output):
        # required step: call the parent class constructor
        super(Net,self).__init__()
        self.hidden = torch.nn.Linear(n_feature,n_hidden)
        self.predict = torch.nn.Linear(n_hidden,n_output)
    def forward(self,x):
        x = F.relu(self.hidden(x))
        x = self.predict(x)
        return x
###### instantiate our neural network ######
net = Net(2,10,2)
optimizer = torch.optim.SGD(net.parameters(),lr=0.1)
loss_func = torch.nn.CrossEntropyLoss()      # cross-entropy loss on the class labels
###### train our neural network ######
for i in range(100):
    prediction = net(x)
    loss = loss_func(prediction,y)
    # zero the gradients
    optimizer.zero_grad()
    # compute the gradients
    loss.backward()
    # update the parameters
    optimizer.step()
    if i % 20 == 0:
        print(loss)

Out
tensor(0.5676, grad_fn=<NllLossBackward>)
tensor(0.0800, grad_fn=<NllLossBackward>)
tensor(0.0339, grad_fn=<NllLossBackward>)
tensor(0.0204, grad_fn=<NllLossBackward>)
tensor(0.0143, grad_fn=<NllLossBackward>)


###### make a prediction ######
x1 = torch.FloatTensor([2,2])  # !!! the input must be converted to a Tensor
np.argmax(net(x1).data.numpy())
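  • A quick accuracy check over the whole training set (this snippet is mine, not from the original article):
# fraction of training points the network now classifies correctly
pred_labels = torch.argmax(net(x), dim=1)
accuracy = (pred_labels == y).float().mean()
print(accuracy)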
###### save our neural network ######
# save the whole network
torch.save(net, 'net.pkl')
# save only the network's parameters
torch.save(net.state_dict(), 'net_params.pkl')
# load the whole network back
net2 = torch.load('net.pkl')
# restore a network from its saved parameters
# !! we must first create a network with exactly the same structure as the original
net3 = Net(2,10,2)
net3.load_state_dict(torch.load('net_params.pkl'))
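  • As a quick check that the restore worked (this snippet is mine, not from the original article), the loaded networks should give the same outputs as the original:
x_test = torch.FloatTensor([[2,2]])
print(net(x_test))
print(net2(x_test))
print(net3(x_test))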

Congratulations! You have taken your first step with PyTorch!
