PyTorch implements linear regression - PyTorch study notes 1


This article records notes taken while watching Master Liu's PyTorch teaching videos on Bilibili (station B).
Video link: PyTorch Deep Learning Practice

1. Python knowledge points

1. Class inheritance

When a class inherits from another, it is initialized like this (Module is the parent class from which LinearModel inherits):

import torch

class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
super(LinearModel, self).__init__() first finds the parent class of LinearModel (here, the class Module), treats the LinearModel object self as an object of the Module class, and then calls the "converted" Module object's __init__(). Put simply, the subclass places the parent class's __init__() inside its own __init__(), so the subclass gains everything the parent's __init__() sets up.

Looking at the code again: the LinearModel class inherits from nn.Module, and super(LinearModel, self).__init__() initializes the attributes inherited from nn.Module, using nn.Module's own initialization method. For example:

import torch.nn as nn

class LinearModel(nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # input image channels: 1; output channels: 6; 5x5 convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)

That is to say, the subclass inherits all the properties and methods of the parent class, and the parent class's attributes are naturally initialized by the parent class's method. Of course, if the initialization logic differs from the parent's, you can also re-initialize things yourself without using the parent's method. For example:

import numpy as np
import torch
from torch.utils.data import Dataset

class DiabetesDataset(Dataset):
    def __init__(self, filepath):
        xy = np.loadtxt(filepath, delimiter=',', dtype=np.float32)
        # e.g. if xy is an N x 9 matrix, its shape is (N, 9), so shape[0] takes out N, the total number of samples
        self.len = xy.shape[0]
        self.x_data = torch.from_numpy(xy[:, :-1])
        self.y_data = torch.from_numpy(xy[:, [-1]])

2. Callable objects

Here linear is an instantiated object of the Linear class, and yet the object can be called directly, as below:

self.linear = torch.nn.Linear(1, 1)  # instantiate an object of the class
y_pred = linear(x)                   # the instantiated object can be called directly

To achieve this, you need to define a __call__ method when creating the class; then the object can be called directly:

class FooBar:
    def __init__(self):
        pass

    def __call__(self, *args, **kwargs):
        print("Hello" + str(args[0]))

foobar = FooBar()
foobar(1, 2, 3)

Output:

Hello1

Because args here is a tuple, the 0th element of the tuple is the number 1.

3. Function parameters: *args and **kwargs

Here is a supplementary explanation of the *args and **kwargs parameters used when defining a function:

def test(*args, **kwargs):
    print(args)
    print(kwargs)

test(1, 2, 3, x=5, y=6)

Output:

(1, 2, 3)
{'x': 5, 'y': 6}

That is, args collects all the unnamed (positional) arguments passed in into a tuple, and kwargs collects all the named (keyword) arguments into a dictionary. Note that the arguments must be passed with the unnamed ones first and the named ones last; otherwise an error is reported.
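
For example, calling the test function defined above with a keyword argument before a positional one fails (a quick sketch):

test(1, 2, 3, x=5, y=6)  # OK: unnamed arguments first, named ones last
# test(x=5, 1, 2)        # SyntaxError: positional argument follows keyword argument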

2. PyTorch-related functions

1. torch.nn.Module class

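A minimal sketch of the usual pattern, using a hypothetical TinyModel (not from the video): every model subclasses torch.nn.Module, calls the parent's __init__(), and overrides forward(); Module then tracks the registered submodules and parameters automatically.

import torch

class TinyModel(torch.nn.Module):  # hypothetical example model
    def __init__(self):
        super(TinyModel, self).__init__()
        self.fc1 = torch.nn.Linear(4, 2)
        self.fc2 = torch.nn.Linear(2, 1)

    def forward(self, x):
        return self.fc2(self.fc1(x))

m = TinyModel()
print(m)  # Module provides a readable summary of the registered submodules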

2. Linear() function
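
A quick sketch of how Linear() behaves: torch.nn.Linear(in_features, out_features, bias=True) applies y = x W^T + b, and its weight tensor has shape (out_features, in_features):

import torch

linear = torch.nn.Linear(3, 2)  # 3 input features, 2 output features
x = torch.randn(5, 3)           # a batch of 5 samples
y = linear(x)
print(y.shape)                  # torch.Size([5, 2])
print(linear.weight.shape)      # torch.Size([2, 3])
print(linear.bias.shape)        # torch.Size([2])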

3. torch.nn.MSELoss() function

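A small numeric sketch of MSELoss: by default it averages the squared errors; summing instead of averaging is what size_average=False did in older versions (newer versions use reduction='sum'):

import torch

pred = torch.tensor([1.0, 2.0, 3.0])
target = torch.tensor([1.0, 2.0, 5.0])

mse_mean = torch.nn.MSELoss()                # default: mean of squared errors
mse_sum = torch.nn.MSELoss(reduction='sum')  # sum of squared errors

print(mse_mean(pred, target).item())  # (0 + 0 + 4) / 3 = 1.333...
print(mse_sum(pred, target).item())   # 0 + 0 + 4 = 4.0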

4. torch.optim.SGD() function

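A minimal sketch of torch.optim.SGD: it receives the tensors to optimize plus a learning rate, and each step() applies p = p - lr * grad (hypothetical scalar w for illustration):

import torch

w = torch.tensor([0.0], requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

loss = (w - 3.0).pow(2).sum()  # minimum at w = 3
optimizer.zero_grad()
loss.backward()                # dloss/dw = 2 * (w - 3) = -6
optimizer.step()               # w = 0 - 0.1 * (-6) = 0.6
print(w.item())                # 0.6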

5. Module.parameters() function
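
A brief sketch of what parameters() yields: it recursively walks the module and returns every registered parameter tensor, which is exactly what the optimizer needs (named_parameters() is used here so the names are visible):

import torch

model = torch.nn.Linear(1, 1)  # a single layer used as a tiny model
for name, p in model.named_parameters():
    print(name, p.shape, p.requires_grad)
# weight torch.Size([1, 1]) True
# bias torch.Size([1]) True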

3. Detailed explanation of the code in this section

1. Create a dataset

The first step is to create the dataset; here it is built from PyTorch Tensors.

# create the dataset
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])
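
Each Tensor is 3x1: three samples, one feature per sample. A quick check, assuming the tensors just created:

print(x_data.shape)  # torch.Size([3, 1]) - 3 samples, 1 feature each
print(y_data.shape)  # torch.Size([3, 1])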

2. Define the model

The model definition inherits from the Module class in PyTorch's torch.nn (Neural Network) package.

"""模型必须继承自Module"""
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        #构造一个对象,Linear里面分别是input_feature(输入样本的维度),output_feature,和bias(True默认 or False)
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

#定义模型
model = LinearModel()

This class creates a linear model, i.e. y = w*x + b. Note that the instantiated object linear here is callable; for details see section 1, Python knowledge points.

# construct an object; Linear's arguments are in_features (the input sample dimension), out_features, and bias (True by default, or False)
self.linear = torch.nn.Linear(1, 1)

With the Linear layer used here, each sample's input in this linear regression is 1-dimensional, and so is the output.

Note:

Linear() automatically creates the weight and bias as Tensors; there is no need to handle their initialization and assignment yourself.
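
A quick way to see this (assuming the model instantiated above): the freshly created layer already holds randomly initialized parameters with requires_grad=True:

print(model.linear.weight)  # a 1x1 weight tensor, randomly initialized, requires_grad=True
print(model.linear.bias)    # a 1-element bias tensor, also randomly initialized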

3. Define the loss function and optimizer

# construct the loss function
criterion = torch.nn.MSELoss(size_average=False)

Here an MSE loss function is constructed from PyTorch's built-in module; size_average=False means the individual losses are summed rather than averaged. (Newer PyTorch versions deprecate size_average in favor of reduction='sum'.)

# construct the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Here a gradient-descent optimizer is built from PyTorch's optim module. The model class inherits from torch.nn.Module, so model.parameters() is a method of the parent class: it automatically walks all members of the model, collects the tensors that require gradient-based optimization, and SGD then performs gradient descent on them.

4. Model training

# train the model
for epoch in range(1000):
    # model(x_data) automatically calls forward(); this behavior comes from the parent class Module
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    # zero all gradients
    optimizer.zero_grad()
    # backpropagate to compute the gradients
    loss.backward()
    # update the weights and bias, i.e. w and b
    optimizer.step()
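
A side note on why zero_grad() is needed: backward() accumulates gradients into .grad instead of overwriting it. A minimal sketch (hypothetical tensor w, unrelated to the model above):

import torch

w = torch.tensor([1.0], requires_grad=True)
(2 * w).sum().backward()
print(w.grad)  # tensor([2.])
(2 * w).sum().backward()
print(w.grad)  # tensor([4.]) - accumulated, which is why we zero gradients every epoch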

4. Complete code

"""
Pytorch实现线性回归,向量化
"""
import torch
import visdom

# create the dataset
x_data = torch.Tensor([[1.0], [2.0], [3.0]])
y_data = torch.Tensor([[2.0], [4.0], [6.0]])

"""The model must inherit from Module"""
class LinearModel(torch.nn.Module):
    def __init__(self):
        super(LinearModel, self).__init__()
        # construct an object; Linear's arguments are in_features (the input sample dimension), out_features, and bias (True by default, or False)
        self.linear = torch.nn.Linear(1, 1)

    def forward(self, x):
        y_pred = self.linear(x)
        return y_pred

# define the model
model = LinearModel()

# construct the loss function
criterion = torch.nn.MSELoss(size_average=False)

# construct the optimizer
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)


vis = visdom.Visdom(env='main')  # set the environment window name; defaults to 'main' if not set
opt = {
    'xlabel': 'epochs',
    'ylabel': 'loss_value',
    'title': 'train_loss'
}
# define a plot window
loss_window = vis.line(
    X=[0],
    Y=[0],
    opts=opt
)

# train the model
for epoch in range(1000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    print(epoch, loss.item())

    # zero all gradients
    optimizer.zero_grad()
    # backpropagate to compute the gradients
    loss.backward()
    # update the weights and bias, i.e. w and b
    optimizer.step()
    # continuously update the plot
    vis.line(X=[epoch], Y=[loss.item()], win=loss_window, opts=opt, update='append')

print('w= ', model.linear.weight.item())
print('b= ', model.linear.bias.item())

x_test = torch.Tensor([4.0])
y_test = model(x_test)

print('y_pred= ', y_test.data.item())
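
Since the training data satisfies y = 2x exactly, the loss can be driven close to zero: after 1000 epochs the learned parameters should come out near w = 2 and b = 0, and the prediction for x = 4 should be close to 8.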

Origin: blog.csdn.net/Er_Studying_Bai/article/details/120772226