Learning diary (3.5)

Part 1. Using optim to optimize the model

Today's main topic is the optimization methods in torch's optim module.
Building on yesterday's nn-based code, we replace the hand-written parameter update with the packaged optim version.

#On top of using nn, we now learn another torch tool, the optimizer, which can help us optimize the model
#so here we keep yesterday's code (with the hand-written model update removed) and use the optimizer to optimize the model instead
import torch.nn as nn
import torch
N,D_in,H,D_out=64,1000,100,10
x=torch.randn(N,D_in)
y=torch.randn(N,D_out)
model=torch.nn.Sequential(  ## the layers of the model, in order
    # the first layer is linear; a bit different from before, this one has a bias:
    #   y = w1*x + b1
    torch.nn.Linear(D_in,H),
    # the second layer is a ReLU activation
    torch.nn.ReLU(),
    # the third layer is linear again, the same form as the first
    torch.nn.Linear(H,D_out),
)
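# (a note of my own, not in the original comments: the Sequential above computes
#  h = ReLU(x*w1 + b1) and then y_pred = h*w2 + b2; both Linear layers keep their default bias)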

# here we use nn's MSELoss to handle the loss value; its reduction argument controls the output form:
#   reduction='none'  returns the loss element-wise, as a tensor
#   reduction='sum'   returns the sum of the losses
#   reduction='mean'  returns the average loss ('elementwise_mean' is the old, deprecated spelling)
loss_fn=nn.MSELoss(reduction='sum')
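# (a quick illustration of my own, not part of the original script: the reduction modes
#  on dummy tensors, where the squared error is 1 for every element)
#    nn.MSELoss(reduction='none')(torch.ones(2,3), torch.zeros(2,3))  # 2x3 tensor of ones
#    nn.MSELoss(reduction='sum')(torch.ones(2,3), torch.zeros(2,3))   # tensor(6.)
#    nn.MSELoss(reduction='mean')(torch.ones(2,3), torch.zeros(2,3))  # tensor(1.)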
learning_rate=1e-4  # 1*10^-4; in torch the learning rate is usually chosen between 1e-4 and 1e-5
# the optimizer updates the model's parameters automatically, so we pass in all the parameters along with the learning rate
optimizer=torch.optim.Adam(model.parameters(),lr=learning_rate)


for it in range(500): 
    y_pred=model(x)
    loss=loss_fn(y_pred,y)
    print("第",it,"轮","损失值:",loss.item())
    #在每次循环中清零 grad避免累加
    optimizer.zero_grad()
    loss.backward()
    #optimizer的执行step更新指令,更新model的每一个parameter
    optimizer.step()
    
   

    

Looking at the training output:
[Screenshots: the printed loss values over the 500 iterations]
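To make clear exactly what the optimizer packages up, here is a minimal sketch of my own (not from the original post), written with plain SGD because its update rule is the simplest; Adam additionally keeps running averages of each parameter's gradients and squared gradients:

# one training iteration written by hand, as in yesterday's code:
y_pred = model(x)
loss = loss_fn(y_pred, y)
loss.backward()
with torch.no_grad():
    for param in model.parameters():
        param -= learning_rate * param.grad  # gradient-descent step on every parameter
model.zero_grad()

# the same iteration with the packaged optimizer:
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
optimizer.zero_grad()
y_pred = model(x)
loss = loss_fn(y_pred, y)
loss.backward()
optimizer.step()  # applies the update to every parameter for us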

Part 2. Making the structure of a PyTorch model clear

1. Define input and output

With the inputs and outputs well defined, it is easy to feed in the training data.

2. Define a torch model

The model shown here has only two linear layers and no bias, so it is very simple. Even when the network gets complicated, there is no need to worry: we initialize each layer's structure one by one.

3. Define a loss function

The loss is computed with nn.MSELoss().

4. Optimize this model

The optimizer used here is optim.Adam().

5. Train this model by updating the parameters

Taking this neural network's training as an example:
Forward pass: run x forward through the network to compute the predicted value y_pred.
Compute the loss: take the difference (target - y_pred), square it, and reduce it to get the loss.
Backward pass: back-propagate through the network; the chain rule carries the update from the output layer -> hidden layer -> input layer weights (and the biases would be adjusted too, if the layers had them), so the model's parameters are trained and the loss value keeps decreasing, giving an optimized model.
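Concretely, since reduction='sum' is used here, the loss is just the sum of the squared differences, so the two lines below give the same value (a small check of my own, not from the original post):

loss = loss_fn(y_pred, y)                 # nn.MSELoss(reduction='sum')
loss_by_hand = ((y_pred - y) ** 2).sum()  # the same number, written out explicitly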

No matter how long a model's code gets, I want it to keep a clear structure. The above roughly covers the most basic workflow and the most basic ideas of a neural network in PyTorch; having taken in this knowledge from today's lesson, I feel much more settled. Next we write the network as a proper neural network class:

import torch
import torch.nn as nn
N,D_in,H,D_out=64,1000,100,10


#1.define input and output
x=torch.randn(N,D_in)
y=torch.randn(N,D_out)


#2.define a torch model 
class TwoLayerNet(torch.nn.Module):
    def __init__(self,D_in,H,D_out):
        super(TwoLayerNet,self).__init__()  # super() is how Python calls the parent class's method
        #2.define the model architecture
        self.linear1=torch.nn.Linear(D_in,H,bias=False)  # bias=False: no offset term in this layer
        self.linear2=torch.nn.Linear(H,D_out,bias=False)
    def forward(self,x):
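        # clamp(min=0) does the same thing as ReLU: every negative entry becomes 0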
        y_pred=self.linear2(self.linear1(x).clamp(min=0))
        return y_pred

    
#2. define a model
model=TwoLayerNet(D_in,H,D_out)


#3.define a loss function
loss_fn=nn.MSELoss(reduction='sum')
learning_rate=1e-4



#4.optimize this model
optimizer=torch.optim.Adam(model.parameters(),lr=learning_rate)
#5.train this model
for it in range(0,500):
#Forward pass
    y_pred=model(x)
#compute the loss
    loss=loss_fn(y_pred,y)
    print("iteration", it, "loss:", loss.item())
#Backward pass
    optimizer.zero_grad()
    loss.backward()
    #update the model parameters
    optimizer.step()
    

        
             

The result: the model trains successfully.
[Screenshot: the printed loss values from the training run]
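As a final sanity check of my own (hypothetical, not in the original post), the trained model can be run on a fresh input to confirm the shapes line up:

x_new = torch.randn(1, D_in)   # one new sample with the same feature size as the training data
y_new = model(x_new)           # forward pass through the trained TwoLayerNet
print(y_new.shape)             # torch.Size([1, 10]), one prediction of size D_out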


Origin www.cnblogs.com/Eldq/p/12423940.html