PyTorch Learning 2: Linear Regression


Linear regression

Everyone is probably familiar with linear regression: it is the first topic in just about every machine learning book. As a brief review, simple univariate linear regression means the following: given a series of points, find a straight line such that the sum of the squared vertical distances between the line and the points is as small as possible.
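
Concretely, given data points (x_i, y_i) for i = 1..n, we look for a weight w and bias b that minimize the mean squared error:

L(w, b) = \frac{1}{n} \sum_{i=1}^{n} (w x_i + b - y_i)^2

Minimizing this objective with gradient descent is exactly what the training loop below does.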


import torch
from torch import nn, optim
from torch.autograd import Variable
import numpy as np
import matplotlib.pyplot as plt

x_train = np.array([[3.3], [4.4], [5.5], [6.71], [6.93], [4.168],
                    [9.779], [6.182], [7.59], [2.167], [7.042],
                    [10.791], [5.313], [7.997], [3.1]], dtype=np.float32)

y_train = np.array([[1.7], [2.76], [2.09], [3.19], [1.694], [1.573],
                    [3.366], [2.596], [2.53], [1.221], [2.827],
                    [3.465], [1.65], [2.904], [1.3]], dtype=np.float32)

Remember the basic processing unit inside PyTorch? The Tensor. We need to convert the numpy arrays to Tensors; if you remember the content of the previous section, you will recall the function for this: torch.from_numpy()

x_train = torch.from_numpy(x_train)

y_train = torch.from_numpy(y_train)

In this way, the data is converted into tensors.
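
One detail worth keeping in mind: torch.from_numpy() does not copy the data; the returned tensor shares memory with the numpy array, so modifying one modifies the other. A minimal sketch (reusing the imports above):

a = np.ones(3, dtype=np.float32)
t = torch.from_numpy(a)   # t shares memory with a, no copy is made
a[0] = 5.0                # changing the array...
print(t)                  # ...changes the tensor too: tensor([5., 1., 1.])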

Let's start defining the model

# Linear Regression Model
class LinearRegression(nn.Module):
    def __init__(self):
        super(LinearRegression, self).__init__()
        self.linear = nn.Linear(1, 1)  # input and output is 1 dimension

    def forward(self, x):
        out = self.linear(x)
        return out
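
As a quick sanity check (a minimal sketch, not part of the original code): nn.Linear(1, 1) holds a single weight and a single bias, so the model computes out = w * x + b for each row of the input.

m = LinearRegression()
sample = torch.randn(5, 1)      # a batch of 5 one-dimensional inputs
print(m.linear.weight.shape)    # torch.Size([1, 1])
print(m.linear.bias.shape)      # torch.Size([1])
print(m(sample).shape)          # torch.Size([5, 1]), one prediction per input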

Then you need to define the loss and the optimizer, i.e. the error function and the optimization function:

model = LinearRegression()
# Define loss and optimization functions
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-4)

The mean squared error (least squares) loss is used here; for classification problems we more often use the cross-entropy loss. The optimizer is stochastic gradient descent. Note that you need to pass the model's parameters via model.parameters() so that the optimizer knows which parameters it should optimize.
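
With its default settings, nn.MSELoss() averages the squared errors over all elements, so it agrees with computing mean((out - target)^2) by hand. A minimal sketch:

out = torch.tensor([[1.0], [2.0]])
target = torch.tensor([[0.5], [2.5]])
print(nn.MSELoss()(out, target))     # tensor(0.2500)
print(((out - target) ** 2).mean())  # tensor(0.2500), the same value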

Start training

num_epochs = 1000
for epoch in range(num_epochs):
    inputs = Variable(x_train)
    target = Variable(y_train)

    # forward
    out = model(inputs)            # forward pass
    loss = criterion(out, target)  # compute the loss
    # backward
    optimizer.zero_grad()          # zero the gradients
    loss.backward()                # backward pass
    optimizer.step()               # update the parameters

    if (epoch+1) % 20 == 0:
        print('Epoch[{}/{}], loss: {:.6f}'
              .format(epoch+1, num_epochs, loss.item()))

Model testing

model.eval()
predict = model(Variable(x_train))
predict = predict.data.numpy()
plt.plot(x_train.numpy(), y_train.numpy(), 'ro', label='Original data')
plt.plot(x_train.numpy(), predict, label='Fitting Line')
# show legend
plt.legend()
plt.show()

# save the model
torch.save(model.state_dict(), './linear.pth')

It is important to note that you need to call model.eval() to switch the model to evaluation mode. This is mainly because layers such as dropout and batch normalization behave differently during training and testing.
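
Our regression model has neither of these layers, so model.eval() changes nothing here, but it is a good habit. To see the difference, here is a minimal sketch with a standalone nn.Dropout layer (not part of the model above):

drop = nn.Dropout(p=0.5)
x = torch.ones(1, 6)
drop.train()     # training mode: randomly zeroes entries, scales the rest by 1/(1-p)
print(drop(x))   # e.g. tensor([[2., 0., 2., 2., 0., 0.]]) (random)
drop.eval()      # evaluation mode: dropout becomes the identity
print(drop(x))   # tensor([[1., 1., 1., 1., 1., 1.]])

For reference, the training loop above prints output like the following: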



Epoch[20/1000], loss: 0.555306
Epoch[40/1000], loss: 0.517519
Epoch[60/1000], loss: 0.490737
Epoch[80/1000], loss: 0.471731
Epoch[100/1000], loss: 0.458221
Epoch[120/1000], loss: 0.448595
Epoch[140/1000], loss: 0.441715
Epoch[160/1000], loss: 0.436776
Epoch[180/1000], loss: 0.433208
Epoch[200/1000], loss: 0.430609
Epoch[220/1000], loss: 0.428696
Epoch[240/1000], loss: 0.427267
Epoch[260/1000], loss: 0.426180
Epoch[280/1000], loss: 0.425335
Epoch[300/1000], loss: 0.424661
Epoch[320/1000], loss: 0.424109
Epoch[340/1000], loss: 0.423642
Epoch[360/1000], loss: 0.423235
Epoch[380/1000], loss: 0.422872
Epoch[400/1000], loss: 0.422539
Epoch[420/1000], loss: 0.422227
Epoch[440/1000], loss: 0.421931
Epoch[460/1000], loss: 0.421646
Epoch[480/1000], loss: 0.421368
Epoch[500/1000], loss: 0.421096
Epoch[520/1000], loss: 0.420828
Epoch[540/1000], loss: 0.420563
Epoch[560/1000], loss: 0.420300
Epoch[580/1000], loss: 0.420039
Epoch[600/1000], loss: 0.419779
Epoch[620/1000], loss: 0.419520
Epoch[640/1000], loss: 0.419261
Epoch[660/1000], loss: 0.419003
Epoch[680/1000], loss: 0.418746
Epoch[700/1000], loss: 0.418489
Epoch[720/1000], loss: 0.418233
Epoch[740/1000], loss: 0.417977
Epoch[760/1000], loss: 0.417720
Epoch[780/1000], loss: 0.417465
Epoch[800/1000], loss: 0.417210
Epoch[820/1000], loss: 0.416955
Epoch[840/1000], loss: 0.416700
Epoch[860/1000], loss: 0.416446
Epoch[880/1000], loss: 0.416191
Epoch[900/1000], loss: 0.415937
Epoch[920/1000], loss: 0.415684
Epoch[940/1000], loss: 0.415430
Epoch[960/1000], loss: 0.415177
Epoch[980/1000], loss: 0.414924
Epoch[1000/1000], loss: 0.414672
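
To reuse the saved weights later, build a fresh model and load the state dict back in; a minimal sketch using the './linear.pth' path from above:

model = LinearRegression()
model.load_state_dict(torch.load('./linear.pth'))
model.eval()   # switch to evaluation mode before running predictions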
