PyTorch VGG16 Source Code Walkthrough

  • I feel like I've fallen out of step with the times: some networks were published years ago and already count as "classics", yet this rookie still only half understands them and is only now getting started. Time to go back and weave the net, so let's read through the source code and get to know a classic...

  • A complete VGG16 implementation consists of the following pieces:

# Excerpted from the PyTorch (torchvision) source code, torchvision/models/vgg.py;
# hopefully that doesn't count as infringement...

import torch
import torch.nn as nn
from torch.hub import load_state_dict_from_url

# Download URLs of the pretrained weights; only the vgg16 entry is shown here
model_urls = {
    'vgg16': 'https://download.pytorch.org/models/vgg16-397923af.pth',
}

# The VGG class: takes a feature-extraction module and the number of classes,
# and maps an input image to a vector of class scores

class VGG(nn.Module):

    def __init__(self, features, num_classes=1000, init_weights=True):
        super(VGG, self).__init__()
        self.features = features  # the feature-extraction backbone passed in as an argument, i.e. the stack of conv/pool layers
        self.avgpool = nn.AdaptiveAvgPool2d((7, 7))
        self.classifier = nn.Sequential(  # define the classifier head
            nn.Linear(512 * 7 * 7, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, 4096),
            nn.ReLU(True),
            nn.Dropout(),
            nn.Linear(4096, num_classes),
        )
        if init_weights:
            self._initialize_weights()

    def forward(self, x):
        x = self.features(x)  # run the input image through the feature extractor to get the feature maps
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        for m in self.modules():
            if isinstance(m, nn.Conv2d):  # isinstance checks which layer type m is
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                if m.bias is not None:
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)
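
# --- Shape check (a sketch added for illustration, not part of the torchvision
# excerpt). AdaptiveAvgPool2d((7, 7)) pools whatever the backbone outputs down to
# 7x7, and cfg 'D' ends with 512 channels, so flattening yields
# 512 * 7 * 7 = 25088 features, exactly the in_features of the first Linear layer.
_head = nn.Sequential(nn.AdaptiveAvgPool2d((7, 7)), nn.Flatten(1),
                      nn.Linear(512 * 7 * 7, 4096))
print(_head(torch.randn(1, 512, 14, 14)).shape)  # torch.Size([1, 4096])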

'''
The key builder: constructs the feature-extraction network from a cfg list
'''
def make_layers(cfg, batch_norm=False):
    layers = []
    in_channels = 3
    for v in cfg:
        if v == 'M':
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            if batch_norm:
                layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)]
            else:
                layers += [conv2d, nn.ReLU(inplace=True)]
            in_channels = v
    return nn.Sequential(*layers)
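
# Example output (a sketch, not in the torchvision excerpt; the printed repr may
# vary slightly between PyTorch versions): a toy cfg [64, 'M'] expands to
# Conv2d -> ReLU -> MaxPool2d. Every conv uses kernel_size=3 with padding=1, so
# only the 'M' pooling entries change (halve) the spatial resolution.
print(make_layers([64, 'M']))
# Sequential(
#   (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
#   (1): ReLU(inplace=True)
#   (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
# )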



# cfgs uses letters as dictionary keys for the layer lists; VGG16 corresponds to 'D' (see the original paper)
cfgs = {
    'A': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'B': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    'D': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    'E': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}
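
# Why cfg 'D' is "VGG16" (a worked count, not in the torchvision excerpt): the
# 'M' entries are parameter-free pooling layers, so 'D' contributes 13 conv
# layers, and the classifier adds 3 fully connected layers: 13 + 3 = 16 weight
# layers in total.
print(sum(1 for v in cfgs['D'] if v != 'M'))  # 13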

'''
Definition of _vgg: builds the network described by the given cfg.
To build VGG16, the arguments are:
arch = 'vgg16'
cfg = 'D'
batch_norm = False
'''

def _vgg(arch, cfg, batch_norm, pretrained, progress, **kwargs):
    if pretrained:
        kwargs['init_weights'] = False
    model = VGG(make_layers(cfgs[cfg], batch_norm=batch_norm), **kwargs)
    if pretrained:  # load the pretrained parameters; mine were saved to the default cache path
        state_dict = load_state_dict_from_url(model_urls[arch],
                                              progress=progress)
        model.load_state_dict(state_dict)
    return model
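
# Where the downloaded weights end up (a side note, not in the torchvision
# excerpt): load_state_dict_from_url caches checkpoints under torch.hub's cache
# directory, typically <torch.hub.get_dir()>/checkpoints, unless TORCH_HOME or
# an explicit model_dir says otherwise.
print(torch.hub.get_dir())  # e.g. ~/.cache/torch/hub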

# models.vgg16 resolves to this function, which constructs vgg16 via _vgg
def vgg16(pretrained=False, progress=True, **kwargs):
    return _vgg('vgg16', 'D', False, pretrained, progress, **kwargs)
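
# The batch-norm variant is built the same way, just with batch_norm=True and a
# different arch key (a sketch following the same pattern; loading its pretrained
# weights would also need a 'vgg16_bn' entry in model_urls):
def vgg16_bn(pretrained=False, progress=True, **kwargs):
    return _vgg('vgg16_bn', 'D', True, pretrained, progress, **kwargs)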


# instantiate a vgg16 from torchvision.models with pretrained ImageNet weights
from torchvision import models
net = models.vgg16(pretrained=True)

With the code above, a VGG16 is fully constructed; training and inference then work the same as for any other network.
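
As a quick end-to-end illustration (a sketch, not from the original post; the file name cat.jpg is a hypothetical example), a single-image inference pass with the pretrained model looks roughly like this, using the standard ImageNet preprocessing those weights expect:

import torch
from PIL import Image
from torchvision import models, transforms

# standard ImageNet preprocessing for the pretrained VGG16 weights
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

net = models.vgg16(pretrained=True)
net.eval()  # put Dropout into inference mode

img = Image.open('cat.jpg').convert('RGB')  # hypothetical input image
x = preprocess(img).unsqueeze(0)            # add a batch dimension -> [1, 3, 224, 224]
with torch.no_grad():
    scores = net(x)                         # [1, 1000] class scores
print(scores.argmax(dim=1))                 # index of the predicted ImageNet class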

  • References
    The PyTorch (torchvision) source code


Reposted from blog.csdn.net/m0_38139098/article/details/105454662