PyTorch Learning 2: Linear Regression


Linear Regression

Linear regression should be familiar to everyone; it is one of the first topics in virtually every machine learning book. Here is a quick refresher on simple one-variable linear regression: given a set of points, find a straight line such that the sum of squared vertical distances between the line and the points is as small as possible.
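To make "distance" precise: with the one-variable model y = w * x + b, training searches for the w and b that minimize the mean squared error over the n data points,

loss(w, b) = (1/n) * Σ_i (w * x_i + b − y_i)²

which is exactly what the nn.MSELoss criterion used below computes.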


import torch
from torch import nn, optim
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt

x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                    [9.779], [6.182], [7.59], [2.167], [7.042],
                    [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)

y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                    [3.366], [2.596], [2.53], [1.221], [2.827],
                    [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)

Remember the basic data structure in PyTorch? The Tensor. We need to convert the NumPy arrays into Tensors, and if you recall the previous post, you will surely remember the function for this: torch.from_numpy().

x_train = torch.from_numpy(x_train)

y_train = torch.from_numpy(y_train)

Now the data has been converted to tensors.
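As a quick sanity check (a small sketch, not in the original post), you can inspect the resulting tensors; note that torch.from_numpy shares memory with the NumPy array rather than copying it:

print(x_train.shape, x_train.dtype)  # torch.Size([15, 1]) torch.float32
print(y_train.shape, y_train.dtype)  # torch.Size([15, 1]) torch.float32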

Next, we define the model.

# Linear Regression Model
class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # input and output are both 1-dimensional

    def forward(self, x):
        out = self.linear(x)
        return out
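Under the hood, nn.Linear(1, 1) holds a single learnable weight (the slope) and a single bias (the intercept) and computes out = x @ weight.T + bias. As an illustration (a small sketch, not from the original post), the forward pass can be reproduced by hand:

layer = nn.Linear(1, 1)
x = torch.randn(5, 1)                       # a batch of 5 one-dimensional inputs
manual = x @ layer.weight.t() + layer.bias  # the same computation done manually
assert torch.allclose(layer(x), manual)

Training will adjust exactly these two numbers.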

Then we need to define the loss and the optimizer, i.e. the error criterion and the optimization function.

model = LinearRegression()
# define the loss and the optimizer
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-4)

Here we use the least-squares (mean squared error) loss; later, for classification problems, we will more often use the cross-entropy loss. The optimizer is stochastic gradient descent (SGD). Note that you must pass in model.parameters() so the optimizer knows which parameters it is supposed to update.
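For intuition (a minimal sketch, assuming plain SGD with no momentum or weight decay, as constructed above), each call to optimizer.step() updates every parameter p in place by p ← p − lr * p.grad:

lr = 1e-4
with torch.no_grad():
    for p in model.parameters():
        if p.grad is not None:
            p -= lr * p.grad  # the update optim.SGD applies to each parameter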

Now we start training.

num_epochs = 1000
for epoch in range(num_epochs):
    inputs = Variable(x_train)  # since PyTorch 0.4, Variable is a no-op; plain tensors work too
    target = Variable(y_train)

    # forward
    out = model(inputs)            # forward pass
    loss = criterion(out, target)  # compute the loss
    # backward
    optimizer.zero_grad()  # zero the gradients
    loss.backward()        # backpropagation
    optimizer.step()       # update the parameters

    if (epoch+1) % 20 == 0:
        print('Epoch[{}/{}], loss: {:.6f}'
              .format(epoch+1, num_epochs, loss.item()))  # loss.item() replaces the old loss.data[0]
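After training, the fitted slope and intercept live inside the linear layer; reading them out is a one-liner (a small sketch, not in the original post):

w = model.linear.weight.item()  # learned slope
b = model.linear.bias.item()    # learned intercept
print('fitted line: y = {:.4f} * x + {:.4f}'.format(w, b))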

Testing the model

model.eval()
predict = model(Variable(x_train))
predict = predict.data.numpy()
plt.plot(x_train.numpy(), y_train.numpy(), 'ro', label='Original data')
plt.plot(x_train.numpy(), predict, label='Fitting Line')
# show the legend
plt.legend()
plt.show()

# save the model parameters
torch.save(model.state_dict(), './linear.pth')
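To reuse the trained model later (a minimal sketch, assuming the LinearRegression class defined above is in scope), load the saved state_dict back into a fresh instance:

model = LinearRegression()
model.load_state_dict(torch.load('./linear.pth'))
model.eval()  # switch to evaluation mode before running inference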

Pay special attention to model.eval(): it puts the model into evaluation mode. This matters mainly because layers such as dropout and batch normalization behave differently during training and testing.
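Our linear model contains neither dropout nor batch normalization, so eval() does not change its output here, but the difference is easy to see with a dropout layer (a small illustrative sketch, not from the original post):

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 4)
drop.train()    # training mode: randomly zeroes elements and rescales the survivors by 1/(1-p)
print(drop(x))  # e.g. tensor([[2., 0., 2., 0.]]) -- random on every run
drop.eval()     # evaluation mode: dropout becomes the identity
print(drop(x))  # tensor([[1., 1., 1., 1.]])

Back to the linear model: the training loop above prints the loss every 20 epochs, and the full output looks like this.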



Epoch[20/1000], loss: 0.555306
Epoch[40/1000], loss: 0.517519
Epoch[60/1000], loss: 0.490737
Epoch[80/1000], loss: 0.471731
Epoch[100/1000], loss: 0.458221
Epoch[120/1000], loss: 0.448595
Epoch[140/1000], loss: 0.441715
Epoch[160/1000], loss: 0.436776
Epoch[180/1000], loss: 0.433208
Epoch[200/1000], loss: 0.430609
Epoch[220/1000], loss: 0.428696
Epoch[240/1000], loss: 0.427267
Epoch[260/1000], loss: 0.426180
Epoch[280/1000], loss: 0.425335
Epoch[300/1000], loss: 0.424661
Epoch[320/1000], loss: 0.424109
Epoch[340/1000], loss: 0.423642
Epoch[360/1000], loss: 0.423235
Epoch[380/1000], loss: 0.422872
Epoch[400/1000], loss: 0.422539
Epoch[420/1000], loss: 0.422227
Epoch[440/1000], loss: 0.421931
Epoch[460/1000], loss: 0.421646
Epoch[480/1000], loss: 0.421368
Epoch[500/1000], loss: 0.421096
Epoch[520/1000], loss: 0.420828
Epoch[540/1000], loss: 0.420563
Epoch[560/1000], loss: 0.420300
Epoch[580/1000], loss: 0.420039
Epoch[600/1000], loss: 0.419779
Epoch[620/1000], loss: 0.419520
Epoch[640/1000], loss: 0.419261
Epoch[660/1000], loss: 0.419003
Epoch[680/1000], loss: 0.418746
Epoch[700/1000], loss: 0.418489
Epoch[720/1000], loss: 0.418233
Epoch[740/1000], loss: 0.417977
Epoch[760/1000], loss: 0.417720
Epoch[780/1000], loss: 0.417465
Epoch[800/1000], loss: 0.417210
Epoch[820/1000], loss: 0.416955
Epoch[840/1000], loss: 0.416700
Epoch[860/1000], loss: 0.416446
Epoch[880/1000], loss: 0.416191
Epoch[900/1000], loss: 0.415937
Epoch[920/1000], loss: 0.415684
Epoch[940/1000], loss: 0.415430
Epoch[960/1000], loss: 0.415177
Epoch[980/1000], loss: 0.414924
Epoch[1000/1000], loss: 0.414672


Reposted from blog.csdn.net/yuyangyg/article/details/80046245