[YOLO Object Detection Learning Notes] Deep residual networks, input image size, and output tensor dimensions


YOLO (You Only Look Once) is an end-to-end object detection algorithm that has now reached its third version. Since v3 is already fairly complex, we start by studying the first version.

Someone on GitHub has implemented a PyTorch version: https://github.com/xiongzihua/pytorch-YOLO-v1

My study is based on that source code; the code changes I make along the way are kept at: https://git.dev.tencent.com/zzpu/yolov1.git
 

1 The residual network used in the source code

The original YOLO v1 paper uses a CNN to extract image features and fully connected layers at the end for the regression prediction, borrowing the 1×1 convolution idea from GoogLeNet. Starting with v3, YOLO adopts ResNet-style residual blocks, which make much deeper networks trainable. For that reason, this implementation does not follow the paper's GoogLeNet-inspired backbone; instead it directly modifies PyTorch's official ResNet implementation and uses it as the backbone. Let's first recall the ResNet architecture from the original paper.

The paper's ResNet expects a 224×224 input and produces a 7×7 feature map at the last stage. YOLO, however, takes a 448×448 input and wants a 7×7×30 output tensor (this implementation targets a 14×14×30 output grid instead), so the official PyTorch ResNet needs a few modifications.
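Before modifying anything, it helps to sanity-check the stride arithmetic: the stock ResNet downsamples by a total factor of 32 (conv1 and maxpool each halve the resolution, and layer2 through layer4 each halve it again), so 224/32 = 7 while 448/32 = 14. A minimal sketch of that check, using torchvision's off-the-shelf resnet50 as a stand-in for the backbone:

import torch
import torchvision

# Keep conv1 .. layer4 as a feature extractor (drop avgpool and fc)
backbone = torch.nn.Sequential(*list(torchvision.models.resnet50().children())[:-2])

print(backbone(torch.zeros(1, 3, 224, 224)).shape)  # torch.Size([1, 2048, 7, 7])
print(backbone(torch.zeros(1, 3, 448, 448)).shape)  # torch.Size([1, 2048, 14, 14])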

In PyTorch, the original forward pass looks like this:

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)  # output: B x (512 * block.expansion) x 7 x 7

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)
        x = self.fc(x)

        return x

Layer4 outputs a tensor of shape B × (512 × block.expansion) × 7 × 7 (where B is the batch size). Since the input size changes from 224×224 to 448×448, layer4 now produces a 14×14 feature map. To build the 14×14×30 output tensor, we only need to adjust the number of output channels:

class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1470):
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)

        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        # self.layer5 = self._make_layer(block, 512, layers[3], stride=2)

        # DetNet-style extra stage: reduce to 256 channels, feature-map size unchanged
        # (2048 = 512 * block.expansion when the Bottleneck block is used)
        self.layer5 = self._make_detnet_layer(in_channels=2048)

        # self.avgpool = nn.AvgPool2d(14)  # fit 448 input size
        # self.fc = nn.Linear(512 * block.expansion, num_classes)

        # project down to the 30 output channels per grid cell
        self.conv_end = nn.Conv2d(256, 30, kernel_size=3, stride=1, padding=1, bias=False)
        self.bn_end = nn.BatchNorm2d(30)
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)
    
    def _make_detnet_layer(self, in_channels):
        layers = []
        # DetNet-style bottlenecks: the feature-map size stays unchanged
        layers.append(detnet_bottleneck(in_planes=in_channels, planes=256, block_type='B'))
        layers.append(detnet_bottleneck(in_planes=256, planes=256, block_type='A'))
        layers.append(detnet_bottleneck(in_planes=256, planes=256, block_type='A'))
        return nn.Sequential(*layers)

    def forward(self, x):
        # shape comments below assume a 448x448 input with batch size B

        x = self.conv1(x)      # output: B x 64 x 224 x 224
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)    # output: B x 64 x 112 x 112

        x = self.layer1(x)     # output: 112 x 112
        x = self.layer2(x)     # output: 56 x 56
        x = self.layer3(x)     # output: 28 x 28
        x = self.layer4(x)     # output: 14 x 14
        x = self.layer5(x)     # output: 14 x 14 (size unchanged)

        x = self.conv_end(x)   # output: B x 30 x 14 x 14
        x = self.bn_end(x)
        x = torch.sigmoid(x)

        # x = x.view(-1, 7, 7, 30)
        x = x.permute(0, 2, 3, 1)  # output: B x 14 x 14 x 30

        return x
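The detnet_bottleneck block used in _make_detnet_layer above is defined elsewhere in the repository and is not shown here. Roughly speaking, it is a DetNet-style residual bottleneck (1x1 -> 3x3 -> 1x1) whose 3x3 convolution is dilated, so the 14x14 feature map is not downsampled any further; block_type='B' adds a 1x1 projection on the shortcut when the channel count changes. A sketch of such a block (an illustration, not the exact code from the repo):

import torch.nn as nn
import torch.nn.functional as F

class detnet_bottleneck(nn.Module):
    # Sketch of a DetNet-style bottleneck: 1x1 -> dilated 3x3 -> 1x1 plus a
    # residual shortcut. The dilated 3x3 conv (dilation=2, padding=2, stride=1)
    # keeps the spatial size unchanged.
    expansion = 1

    def __init__(self, in_planes, planes, stride=1, block_type='A'):
        super(detnet_bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(in_planes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=2, dilation=2, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, self.expansion * planes, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(self.expansion * planes)

        # type 'B' (or any channel/stride mismatch) projects the shortcut with a 1x1 conv
        self.downsample = nn.Sequential()
        if stride != 1 or in_planes != self.expansion * planes or block_type == 'B':
            self.downsample = nn.Sequential(
                nn.Conv2d(in_planes, self.expansion * planes, kernel_size=1,
                          stride=stride, bias=False),
                nn.BatchNorm2d(self.expansion * planes),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        out = out + self.downsample(x)
        return F.relu(out)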

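Putting it together, a quick smoke test of the modified backbone. This is a sketch under two assumptions: resnet50-style layer counts [3, 4, 6, 3], and a standard Bottleneck block (expansion = 4, as in torchvision) defined alongside ResNet, so that layer4 really does output 2048 channels. The 30 channels per grid cell follow the original YOLO v1 encoding: 2 boxes x 5 values (x, y, w, h, confidence) plus 20 class scores.

# Assumptions: Bottleneck (expansion = 4) is defined in the same file,
# and the layer counts [3, 4, 6, 3] match resnet50.
net = ResNet(Bottleneck, [3, 4, 6, 3])
out = net(torch.zeros(1, 3, 448, 448))  # one dummy 448x448 RGB image
print(out.shape)                        # expected: torch.Size([1, 14, 14, 30])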