DL_6: PyTorch Neural Network Basics

1 Blocks

1.1 Custom Blocks

In PyTorch, we can flexibly use the nn.Module parent class to construct the layers and blocks we want.

When creating a subclass of nn.Module, we only need to override two methods: __init__ and forward. The network structure is defined in __init__, and the forward computation is defined in forward.

# -*- coding: utf-8 -*- 
# @Time : 2021/9/13 20:12 
# @Author : Amonologue
# @software : pycharm   
# @File : use_PyTorch_module.py
import torch
from torch import nn


class MySequential(nn.Module):
    def __init__(self, *args):
        super().__init__()
        # Register each block under a string key so that _modules
        # (an OrderedDict) tracks it as a proper submodule.
        for idx, block in enumerate(args):
            self._modules[str(idx)] = block

    def forward(self, X):
        # Call the blocks in the order they were registered.
        for block in self._modules.values():
            X = block(X)
        return X


if __name__ == '__main__':
    X = torch.rand(2, 20)
    net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
    print(net(X))
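
The MySequential container above is itself a block that chains other blocks together. A custom block with its own layers follows the same two-method pattern; as a minimal sketch, a hypothetical MLP class might look like this:

import torch
from torch import nn


class MLP(nn.Module):
    def __init__(self):
        super().__init__()
        # The network structure is defined here
        self.hidden = nn.Linear(20, 256)
        self.out = nn.Linear(256, 10)

    def forward(self, X):
        # The forward computation is defined here
        return self.out(nn.functional.relu(self.hidden(X)))


if __name__ == '__main__':
    X = torch.rand(2, 20)
    net = MLP()
    print(net(X))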

2 Parameter Management

2.1 Parameter Access
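
As a minimal sketch of parameter access, assuming a small nn.Sequential network built from standard layers:

import torch
from torch import nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
X = torch.rand(2, 4)
net(X)
# Parameters of a single layer
print(net[2].state_dict())
print(net[2].bias)
print(net[2].bias.data)
# Names and shapes of all parameters in the network
print([(name, param.shape) for name, param in net.named_parameters()])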

2.2 Parameter Initialization
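
A sketch of parameter initialization using the built-in initializers in nn.init, applied through net.apply (the init_normal helper is hypothetical):

from torch import nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))


def init_normal(m):
    # Initialize every linear layer with N(0, 0.01) weights and zero bias
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, mean=0, std=0.01)
        nn.init.zeros_(m.bias)


net.apply(init_normal)
print(net[0].weight.data[0])
print(net[0].bias.data[0])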

2.3 Parameter Tying
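
A sketch of parameter tying: reusing the same layer object ties its parameters, so both occurrences always hold identical values (the shared layer name is hypothetical):

import torch
from torch import nn

# The same Linear instance appears twice, so its parameters are shared (tied)
shared = nn.Linear(8, 8)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(),
                    shared, nn.ReLU(),
                    shared, nn.ReLU(),
                    nn.Linear(8, 1))
X = torch.rand(2, 4)
net(X)
# The two occurrences point to the same parameter tensor
print(net[2].weight.data == net[4].weight.data)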

3 Custom Layers

Just as when constructing blocks, we inherit from the nn.Module class and override __init__ and forward.

3.1 Layers Without Parameters
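
A layer without parameters only needs to define its computation in forward; as a minimal sketch, a hypothetical CenteredLayer that subtracts the mean of its input:

import torch
from torch import nn


class CenteredLayer(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, X):
        # No learnable parameters: simply subtract the mean of the input
        return X - X.mean()


if __name__ == '__main__':
    layer = CenteredLayer()
    print(layer(torch.FloatTensor([1, 2, 3, 4, 5])))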


3.2 Layers With Parameters

# -*- coding: utf-8 -*- 
# @Time : 2021/9/13 20:12 
# @Author : Amonologue
# @software : pycharm   
# @File : use_PyTorch_module.py
import torch
from torch import nn


class MyLayer(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # Wrap learnable tensors in nn.Parameter so they are registered
        # with the module and updated during training.
        self.weight = nn.Parameter(torch.randn(in_features, out_features))
        self.bias = nn.Parameter(torch.randn(out_features))

    def forward(self, X):
        # Use the parameters directly (not .data) so autograd can track them.
        linear = torch.matmul(X, self.weight) + self.bias
        return nn.functional.relu(linear)


if __name__ == '__main__':
    layer = MyLayer(5, 3)
    print(layer.weight)
    print(layer(torch.FloatTensor([1, 2, 3, 4, 5])))

4 Reading and Writing Files

The results of model training need to be saved; we can use PyTorch's built-in APIs to save the learned parameters.

4.1 Loading and Saving Tensors

# -*- coding: utf-8 -*- 
# @Time : 2021/9/13 21:16 
# @Author : Amonologue
# @software : pycharm   
# @File : load_and_save_tensor.py
import torch


if __name__ == '__main__':
    # Save and load a single tensor
    x = torch.arange(4)
    print(x)
    torch.save(x, 'x_file.pkl')
    x2 = torch.load('x_file.pkl')
    print(x2)
    # Save and load a list of tensors
    y = torch.zeros(4)
    print(y)
    torch.save([x, y], 'xy_file.pkl')
    x3, y2 = torch.load('xy_file.pkl')
    print(x3, y2, sep='\t')
    # Save and load a dictionary of tensors
    mydict = {'x': x, 'y': y}
    print(mydict)
    torch.save(mydict, 'dict_file.pkl')
    mydict2 = torch.load('dict_file.pkl')
    print(mydict2)

4.2 Loading and Saving Model Parameters

torch.save(net.state_dict(), 'params.pkl')     # save only the parameters (the state_dict)
net.load_state_dict(torch.load('params.pkl'))  # load them back into an existing model
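
A complete round trip might look like the following sketch, which assumes the MySequential network from section 1 is available in the same file:

import torch
from torch import nn

# Assumes MySequential from section 1 is defined in the same file or imported
net = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
X = torch.rand(2, 20)
Y = net(X)
torch.save(net.state_dict(), 'params.pkl')

# Rebuild the same architecture, then load the saved parameters into it
clone = MySequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 10))
clone.load_state_dict(torch.load('params.pkl'))
clone.eval()
print(torch.equal(Y, clone(X)))  # True: identical parameters give identical output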

4.3 Loading and Saving the Entire Model

torch.save(net, 'model.pkl')       # pickle the entire model object
new_net = torch.load('model.pkl')  # loading requires the model's class definition to be available

Reposted from blog.csdn.net/CesareBorgia/article/details/120274598