Dive into Deep Learning: Network in Network (NiN)

1. The problem with fully connected layers

Convolutional layers need relatively few parameters.

The first fully connected layer that follows the convolutional part, however, is enormous:

  • LeNet: 16 x 5 x 5 x 120 ≈ 48K;
  • AlexNet: 256 x 5 x 5 x 4096 ≈ 26M;
  • VGG: 512 x 7 x 7 x 4096 ≈ 102M.
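These counts are simply the flattened size of the last convolutional feature map times the width of the fully connected layer. A quick arithmetic check (a sketch, using the layer sizes listed above):

# First fully connected layer: (channels * height * width) inputs x hidden units
lenet_fc   = 16 * 5 * 5 * 120      # 48,000       (~48K)
alexnet_fc = 256 * 5 * 5 * 4096    # 26,214,400   (~26M)
vgg_fc     = 512 * 7 * 7 * 4096    # 102,760,448  (~102M)
print(lenet_fc, alexnet_fc, vgg_fc)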

2. The NiN block

A NiN block is one convolutional layer followed by two 1x1 convolutional layers (stride 1, no padding). Their output has the same shape as the convolutional layer's output, and they play the role of fully connected layers applied to each pixel.
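The key observation is that a 1x1 convolution performs the same computation as a fully connected layer applied independently to every pixel. A minimal sketch (not from the original post; the tensor sizes are arbitrary) that checks this numerically:

import torch
from torch import nn

x = torch.rand(1, 3, 4, 4)                       # one image, 3 channels, 4x4 pixels
conv1x1 = nn.Conv2d(3, 5, kernel_size=1)         # 1x1 convolution: 3 -> 5 channels
fc = nn.Linear(3, 5)                             # fully connected layer on 3-dim "pixels"
fc.weight.data = conv1x1.weight.data.view(5, 3)  # reuse exactly the same weights
fc.bias.data = conv1x1.bias.data

y_conv = conv1x1(x)                                   # shape (1, 5, 4, 4)
y_fc = fc(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # apply fc to each pixel, restore layout
print(torch.allclose(y_conv, y_fc, atol=1e-6))        # True: identical result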

3. The NiN architecture

  • No fully connected layers;
  • NiN blocks alternate with max-pooling layers of stride 2, progressively reducing the height and width while increasing the number of channels;
  • A global average pooling layer produces the final output; its number of input channels equals the number of classes (see the sketch after this list).
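To make the last point concrete, here is a minimal sketch (not part of the original code) of what global average pooling does to a feature map whose channel count equals the number of classes:

import torch
import torch.nn.functional as F

feat = torch.rand(2, 10, 5, 5)                         # (batch, classes, H, W)
out = F.avg_pool2d(feat, kernel_size=feat.size()[2:])  # average over the full H x W window
print(out.shape)                                       # torch.Size([2, 10, 1, 1])
# Flattening this gives a (2, 10) matrix of class scores, with no fully connected layer.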

4. Summary

A NiN block uses a convolutional layer followed by 1x1 convolutional layers; the latter add nonlinearity at every pixel.

NiN replaces the fully connected layers of VGG and AlexNet with a global average pooling layer, which has far fewer parameters and is less prone to overfitting.

5. Code implementation

import sys
import torch
import torch.nn.functional as F
from torch import nn

sys.path.append("..")
import d2lzh_pytorch as d2l

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

def nin_block(in_channels, out_channels, kernel_size, stride, padding):
    # A NiN block: one ordinary convolution followed by two 1x1 convolutions,
    # each with a ReLU. The 1x1 convolutions act as per-pixel fully connected layers.
    blk = nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1),
        nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, kernel_size=1),
        nn.ReLU())
    return blk

"""NIN模型"""
class GlobalAvgPool2d(nn.Module):
 # 全局平均池化层可通过将池化窗⼝形状设置成输⼊的⾼和宽实现
 def __init__(self):
        super(GlobalAvgPool2d, self).__init__()
 def forward(self, x):
        return F.avg_pool2d(x, kernel_size=x.size()[2:])
net = nn.Sequential(
 nin_block(1, 96, kernel_size=11, stride=4, padding=0),
 nn.MaxPool2d(kernel_size=3, stride=2),
 nin_block(96, 256, kernel_size=5, stride=1, padding=2),
 nn.MaxPool2d(kernel_size=3, stride=2),
 nin_block(256, 384, kernel_size=3, stride=1, padding=1),
 nn.MaxPool2d(kernel_size=3, stride=2),
 nn.Dropout(0.5),
 # 标签类别数是10
 nin_block(384, 10, kernel_size=3, stride=1, padding=1),
 GlobalAvgPool2d(),
 # 将四维的输出转成⼆维的输出,其形状为(批量⼤⼩, 10)
 d2l.FlattenLayer())

# Construct a single data example to inspect the output shape of every layer.
X = torch.rand(1, 1, 224, 224)
for name, blk in net.named_children():
    X = blk(X)
    print(name, 'output shape: ', X.shape)
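With the 224 x 224 single-channel input above, the loop should print shapes along these lines (they follow directly from the kernel sizes, strides, and paddings of each stage):

0 output shape:  torch.Size([1, 96, 54, 54])
1 output shape:  torch.Size([1, 96, 26, 26])
2 output shape:  torch.Size([1, 256, 26, 26])
3 output shape:  torch.Size([1, 256, 12, 12])
4 output shape:  torch.Size([1, 384, 12, 12])
5 output shape:  torch.Size([1, 384, 5, 5])
6 output shape:  torch.Size([1, 384, 5, 5])
7 output shape:  torch.Size([1, 10, 5, 5])
8 output shape:  torch.Size([1, 10, 1, 1])
9 output shape:  torch.Size([1, 10])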

"""获取数据和训练模型"""
#使用⽤Fashion-MNIST数据集来训练模型。
batch_size = 128
# 如出现“out of memory”的报错信息,可减⼩batch_size或resize
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size,
resize=224)
lr, num_epochs = 0.002, 5
optimizer = torch.optim.Adam(net.parameters(), lr=lr)
d2l.train_ch5(net, train_iter, test_iter, batch_size, optimizer,device, num_epochs)



Reposted from blog.csdn.net/qq_42012782/article/details/123324777