[Learning Series 4] Implementation of Linear Regression

Table of contents

1.1 nn.Module

1.2 Optimization class

1.3 Loss function

1.4 Implementation of PyTorch linear regression


Assume our basic model is y = wx + b, where w and b are parameters. We use y = 3x + 0.8 to construct the data x and y; after training, the learned w and b should therefore be close to 3 and 0.8 respectively. The steps are:
1. Prepare the data
2. Calculate the predicted value
3. Calculate the loss, zero the parameters' gradients, and perform backpropagation
4. Update the parameters (the update rule is sketched below)
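
For reference, step 4 is plain gradient descent on the mean squared error; written out with the same names as the code below (a standard formulation, not spelled out in the original):

loss = (1/n) * sum((y_true - (w * x + b))^2)
w = w - learning_rate * d(loss)/d(w)
b = b - learning_rate * d(loss)/d(b)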

import torch
import matplotlib.pyplot as plt
import os

os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

learning_rate = 0.01

# 1. Prepare the data
x = torch.rand([500, 1])
y_true = 3 * x + 0.8
# 2. Define the model parameters
w = torch.rand([1, 1], requires_grad=True)
b = torch.tensor(1, requires_grad=True, dtype=torch.float32)

# 2-4. Training loop: forward pass, loss, backpropagation, parameter update
for i in range(6000):
    y_predict = torch.matmul(x, w) + b
    # 3. Compute the loss
    loss = (y_true - y_predict).pow(2).mean()
    # Zero the gradients before backpropagation so they do not accumulate
    if w.grad is not None:
        w.grad.data.zero_()
    if b.grad is not None:
        b.grad.data.zero_()

    loss.backward()

    # 4. Gradient descent update (operate on .data so the update itself is not tracked)
    w.data = w.data - learning_rate * w.grad
    b.data = b.data - learning_rate * b.grad

print(b.data, w.data)

plt.figure(figsize=(20, 8))
plt.scatter(x.numpy().reshape(-1), y_true.numpy().reshape(-1))
y_predict = torch.matmul(x, w) + b
plt.plot(x.numpy().reshape(-1), y_predict.detach().numpy().reshape(-1))
plt.show()
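
To watch the training converge, one optional addition (not in the original code) is to print the loss periodically inside the loop:

    if i % 500 == 0:
        print(f"iteration {i}: loss {loss.item():.6f}")

The loss should steadily approach 0 as w and b approach 3 and 0.8.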

1.1 nn.Module

nn.Module is a class provided by torch.nn. It is the base class for custom networks in PyTorch; it defines many useful methods, and inheriting from it makes defining a network very simple. When defining a network, two methods require special attention:

  1. __init__ must call super().__init__() to inherit the properties and methods of the parent class
  2. The forward method must be implemented to define the forward computation of the network

Using the previous model y = wx + b as an example:

from torch import nn


class Lr(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        out = self.linear(x)
        return out

Note:

1. nn.Linear is a linear layer predefined by torch, also known as a fully connected layer. Its arguments are the number of input features and the number of output features (in_features, out_features), i.e. the number of columns of the input and output; they do not count the batch size (see the shape check below).

2. nn.Module defines the __call__ method, which is implemented by calling forward. This means an instance of Lr can be called directly with its input; under the hood, forward is invoked with the arguments passed in.

# Instantiate the model
model = Lr()
# Pass in the data and compute the result
predict = model(x)
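
As referenced in the notes above, a minimal shape check illustrates the (in_features, out_features) convention; the batch size of 10 here is just an illustrative choice:

import torch
from torch import nn

layer = nn.Linear(1, 1)    # in_features=1, out_features=1
inp = torch.rand([10, 1])  # 10 samples (the batch size), each with 1 feature
out = layer(inp)           # calling the layer invokes its forward method internally
print(out.shape)           # torch.Size([10, 1]) - the batch dimension is preserved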

1.2 Optimization class

The optimizer can be understood as the method torch encapsulates for us to update parameters, such as the common stochastic gradient descent (SGD). The optimizer classes are provided by torch.optim, for example:

1. torch.optim.SGD(parameters, learning rate)
2. torch.optim.Adam(parameters, learning rate)

Note:

1. The parameters can be obtained with model.parameters(), which returns all parameters in the model that have requires_grad=True
2. How to use an optimizer class:
   1. Instantiate it
   2. Zero the gradients of all parameters
   3. Backpropagate to compute the gradients
   4. Update the parameter values

An example is as follows:

from torch import optim

optimizer = optim.SGD(model.parameters(), lr=1e-3)  # 1. Instantiate
optimizer.zero_grad()  # 2. Zero the gradients
loss.backward()  # 3. Compute the gradients
optimizer.step()  # 4. Update the parameter values
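
Only the instantiation step changes if you swap optimizers; for example, Adam can be dropped in the same way (the learning rate here is just an illustrative value):

optimizer = optim.Adam(model.parameters(), lr=1e-3)  # steps 2-4 stay exactly the same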

1.3 Loss function

The previous example is a regression problem; torch also provides many predefined loss functions, for example:

1. Mean squared error: nn.MSELoss(), commonly used for regression problems
2. Cross-entropy loss: nn.CrossEntropyLoss(), commonly used for classification problems such as logistic regression

Usage:

model = Lr()  # 1. Instantiate the model
criterion = nn.MSELoss()  # 2. Instantiate the loss function
optimizer = optim.SGD(model.parameters(), lr=1e-3)  # 3. Instantiate the optimizer
for i in range(100):
    y_predict = model(x)  # 4. Forward pass to compute predictions
    loss = criterion(y_predict, y)  # 5. Pass predictions and targets to the loss function
    optimizer.zero_grad()  # 6. Zero the parameter gradients for this iteration
    loss.backward()  # 7. Compute the gradients
    optimizer.step()  # 8. Update the parameter values

1.4 Implementation of PyTorch linear regression

import torch
from torch import nn, optim
from matplotlib import pyplot as plt

# 1. Define the data
x = torch.rand([50, 1])
y = x * 3 + 0.8


# 2. Define the model
class Lr(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(1, 1)

    def forward(self, x):
        out = self.linear(x)
        return out


# 3. Instantiate the model, the loss function, and the optimizer
model = Lr()
criterion = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=1e-2)

# 4. Train the model
for i in range(20000):
    out = model(x)  # forward pass
    loss = criterion(out, y)  # compute the loss
    optimizer.zero_grad()
    loss.backward()

    optimizer.step()
    if (i + 1) % 20 == 0:
        print(
            f'epoch {i + 1}, loss {loss.item():.3f}, {list(model.parameters())[0].item()}, {list(model.parameters())[1].item()}')

# 5. Evaluate the model
model.eval()  # set the model to evaluation (i.e. inference) mode
predict = model(x)
predict = predict.detach().numpy()
plt.scatter(x.numpy(), y.numpy(), c='r')
plt.plot(x.numpy(), predict)
plt.show()
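
As a side note, inference is usually wrapped in torch.no_grad() so that no computation graph is built at all; this is a standard PyTorch idiom, though not part of the original code:

model.eval()
with torch.no_grad():  # disable gradient tracking during inference
    predict = model(x)
predict = predict.numpy()  # no detach() needed, since no graph was recorded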
