PyTorch: Image Semantic Segmentation with FCN, U-Net, SegNet, and Pretrained Networks

Copyright: Jingmin Wei, Pattern Recognition and Intelligent System, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology

Link to the PyTorch tutorial column



This tutorial is not for commercial use; it is provided for learning and reference only. Please contact the author before reposting.

References

Fully Convolutional Networks for Semantic Segmentation

U-Net: Convolutional Networks for Biomedical Image Segmentation

SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import PIL

import torch.nn as nn
import torch
import torch.nn.functional as F
from torchvision import transforms
import torchvision
# Load the model on the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
print(torch.cuda.device_count())
print(torch.cuda.get_device_name(0))
cuda
1
GeForce MX250

Overview

Semantic segmentation classifies an image at the pixel level: in a given image, every pixel belonging to the same category must be predicted as that category, so a semantic segmentation network understands the image pixel by pixel.

We need to properly distinguish semantic segmentation from instance segmentation. Although their names are similar, they are different computer vision tasks. For example, if a photo contains several people, a semantic segmentation model only has to assign all person pixels to a single class, whereas an instance segmentation model must assign the pixels of different people to different classes. Simply put, instance segmentation goes one step further than semantic segmentation.

With the development of deep learning in computer vision, many deep-learning-based image semantic segmentation networks have been proposed, such as FCN, U-Net, SegNet, and DeepLab. Below we give a brief introduction to the FCN, U-Net, and SegNet architectures.

FCN

FCN (Fully Convolutional Networks) is the semantic segmentation network proposed in the paper Fully Convolutional Networks for Semantic Segmentation, the pioneering work on deep-network-based image semantic segmentation. The model is fully convolutional, with no fully connected classification layers, so it accepts input images of any size. A diagram of the network performing semantic segmentation is shown in the figure below. The main ideas of FCN are:

  1. Typical CNN image classification networks such as VGG, GoogLeNet, and ResNet end with fully connected layers followed by a softmax for classification. This only labels the image as a whole, not each pixel, so fully connected operators are unsuitable for pixel-level segmentation. FCN therefore replaces the last fully connected layers with convolutions, producing a feature map of the same size as the input image, and then applies a softmax to obtain per-pixel class information, achieving segmentation by pixel-wise classification.

  2. End-to-end pixel-level segmentation requires the output to have the same size as the input image, but a convolution + pooling architecture shrinks the feature maps. FCN therefore introduces deconvolution (the same operation as the transpose convolution used in the earlier autoencoder tutorials) to upsample the shrunken feature maps, meeting the pixel-level requirement (see the short sketch after this list).

  3. To use the feature-map information more effectively, FCN proposes a skip-connection structure that fuses shallow and deep feature maps. As we saw in the image style transfer tutorial, shallow feature maps carry strong location information but weak semantic information, while deep feature maps are the opposite. FCN fuses the location-strong but semantics-weak shallow features with the location-weak but semantics-strong deep features, improving the network's segmentation performance.
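As a quick illustration of the deconvolution in point 2 (a minimal sketch added for clarity, not part of the original post; it reuses the torch and nn imports from the top): a transpose convolution with kernel size 3, stride 2, padding 1, and output_padding 1 exactly doubles the height and width of a feature map, and stacking such layers undoes the pooling.

# Toy check: this transpose convolution doubles H and W
x = torch.randn(1, 512, 16, 20) # a coarse feature map
deconv = nn.ConvTranspose2d(512, 512, kernel_size=3, stride=2,
                            padding=1, output_padding=1)
print(deconv(x).shape) # torch.Size([1, 512, 32, 40])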

The figure below, from Fully Convolutional Networks for Semantic Segmentation, shows the principle of the fully convolutional network applied to image semantic segmentation.

[Figure: semantic segmentation with a fully convolutional network, from the FCN paper]

The next figure shows the different FCN variants. FCN-32s takes the final convolution/pooling result and enlarges the feature map 32x with a single transpose convolution; FCN-16s first fuses it with the result one stage earlier and then enlarges 16x; FCN-8s fuses the results of the two earlier stages and enlarges 8x with transpose convolutions. FCN-8s performs the following steps:

  1. Upsample the last feature map P5 (the output of the 5th max-pooling layer in VGG19) by 2x with a transpose convolution to get a new feature map T5, and add it element-wise to the pool4 feature map P4, giving T5+P4.

  2. Upsample T5+P4 by 2x with a transpose convolution to get T4, then add it to the pool3 feature map P3, giving T4+P3.

  3. Upsample T4+P3 by 8x with a transpose convolution to obtain a result the same size as the input.

[Figure: FCN-32s, FCN-16s, and FCN-8s upsampling variants]
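Assuming a 512x640 input (the size used in the model summary below), the arithmetic of the three steps can be checked directly; this is bookkeeping added for clarity, not extra model code.

# FCN-8s size bookkeeping for a 512x640 input
H, W = 512, 640
p5 = (H // 32, W // 32) # (16, 20), pool5
p4 = (H // 16, W // 16) # (32, 40), pool4
p3 = (H // 8, W // 8) # (64, 80), pool3
assert (p5[0] * 2, p5[1] * 2) == p4 # step 1: 2x upsampling matches pool4
assert (p4[0] * 2, p4[1] * 2) == p3 # step 2: 2x upsampling matches pool3
assert (p3[0] * 8, p3[1] * 8) == (H, W) # step 3: 8x upsampling restores the input size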

Building the FCN Network
from torchvision.models import vgg19
class FCN8s(nn.Module):
    def __init__(self, num_classes):
        # Constructor: set up the backbone and the decoder layers
        super().__init__()
        self.num_classes = num_classes
        model_vgg19 = vgg19(pretrained=True)
        self.base_model = model_vgg19.features
        # Define the layer operations we need
        self.relu = nn.ReLU(inplace=True) # ReLU activation
        self.deconv1 = nn.ConvTranspose2d(512, 512, kernel_size=3, stride=2,
                                         padding=1, dilation=1, output_padding=1) # transpose convolution to upsample the feature maps
        self.bn1 = nn.BatchNorm2d(512) # batch normalization keeps activations well distributed and speeds up training
        self.deconv2 = nn.ConvTranspose2d(512, 256, 3, 2, 1, 1, 1)
        self.bn2 = nn.BatchNorm2d(256)
        self.deconv3 = nn.ConvTranspose2d(256, 128, 3, 2, 1, 1, 1)
        self.bn3 = nn.BatchNorm2d(128)
        self.deconv4 = nn.ConvTranspose2d(128, 64, 3, 2, 1, 1, 1)
        self.bn4 = nn.BatchNorm2d(64)
        self.deconv5 = nn.ConvTranspose2d(64, 32, 3, 2, 1, 1, 1)
        self.bn5 = nn.BatchNorm2d(32)
        self.classifier = nn.Conv2d(32, num_classes, kernel_size=1)
        # Indices of the max-pooling layers inside VGG19's features module
        self.layers = {"4": "maxpool_1", "9": "maxpool_2", "18": "maxpool_3",
                       "27": "maxpool_4", "36": "maxpool_5"}
        
    def forward(self, x):
        # Forward pass through the network
        output = {}
        
        for name, layer in self.base_model._modules.items():
            # Extract features layer by layer, starting from the first layer
            x = layer(x)
            if name in self.layers:
                output[self.layers[name]] = x # cache the feature maps named in self.layers
                
        x5 = output["maxpool_5"] # size = (N, 512, x.H/32, x.W/32)
        x4 = output["maxpool_4"] # size = (N, 512, x.H/16, x.W/16)
        x3 = output["maxpool_3"] # size = (N, 256, x.H/8, x.W/8)
        # size = (N, 512, x.H/32, x.W/32)
        score = self.relu(self.deconv1(x5))
        # element-wise addition, size = (N, 512, x.H/16, x.W/16)
        score = self.bn1(score + x4)
        # size = (N, 256, x.H/8, x.W/8)
        score = self.relu(self.deconv2(score))
        # element-wise addition, size = (N, 256, x.H/8, x.W/8)
        score = self.bn2(score + x3)
        # size = (N, 128, x.H/4, x.W/4)
        score = self.bn3(self.relu(self.deconv3(score)))
        # size = (N, 64, x.H/2, x.W/2)
        score = self.bn4(self.relu(self.deconv4(score)))
        # size = (N, 32, x.H, x.W)
        score = self.bn5(self.relu(self.deconv5(score)))
        score = self.classifier(score)
        return score # size = (N, n_class, x.H, x.W)
# Assume there are 21 classes to segment
fcn8s = FCN8s(21)
from torchsummary import summary
from torchviz import make_dot
summary(fcn8s, input_size=(3, 512, 640), device='cpu')
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 512, 640]           1,792
              ReLU-2         [-1, 64, 512, 640]               0
            Conv2d-3         [-1, 64, 512, 640]          36,928
              ReLU-4         [-1, 64, 512, 640]               0
         MaxPool2d-5         [-1, 64, 256, 320]               0
            Conv2d-6        [-1, 128, 256, 320]          73,856
              ReLU-7        [-1, 128, 256, 320]               0
            Conv2d-8        [-1, 128, 256, 320]         147,584
              ReLU-9        [-1, 128, 256, 320]               0
        MaxPool2d-10        [-1, 128, 128, 160]               0
           Conv2d-11        [-1, 256, 128, 160]         295,168
             ReLU-12        [-1, 256, 128, 160]               0
           Conv2d-13        [-1, 256, 128, 160]         590,080
             ReLU-14        [-1, 256, 128, 160]               0
           Conv2d-15        [-1, 256, 128, 160]         590,080
             ReLU-16        [-1, 256, 128, 160]               0
           Conv2d-17        [-1, 256, 128, 160]         590,080
             ReLU-18        [-1, 256, 128, 160]               0
        MaxPool2d-19          [-1, 256, 64, 80]               0
           Conv2d-20          [-1, 512, 64, 80]       1,180,160
             ReLU-21          [-1, 512, 64, 80]               0
           Conv2d-22          [-1, 512, 64, 80]       2,359,808
             ReLU-23          [-1, 512, 64, 80]               0
           Conv2d-24          [-1, 512, 64, 80]       2,359,808
             ReLU-25          [-1, 512, 64, 80]               0
           Conv2d-26          [-1, 512, 64, 80]       2,359,808
             ReLU-27          [-1, 512, 64, 80]               0
        MaxPool2d-28          [-1, 512, 32, 40]               0
           Conv2d-29          [-1, 512, 32, 40]       2,359,808
             ReLU-30          [-1, 512, 32, 40]               0
           Conv2d-31          [-1, 512, 32, 40]       2,359,808
             ReLU-32          [-1, 512, 32, 40]               0
           Conv2d-33          [-1, 512, 32, 40]       2,359,808
             ReLU-34          [-1, 512, 32, 40]               0
           Conv2d-35          [-1, 512, 32, 40]       2,359,808
             ReLU-36          [-1, 512, 32, 40]               0
        MaxPool2d-37          [-1, 512, 16, 20]               0
  ConvTranspose2d-38          [-1, 512, 32, 40]       2,359,808
             ReLU-39          [-1, 512, 32, 40]               0
      BatchNorm2d-40          [-1, 512, 32, 40]           1,024
  ConvTranspose2d-41          [-1, 256, 64, 80]       1,179,904
             ReLU-42          [-1, 256, 64, 80]               0
      BatchNorm2d-43          [-1, 256, 64, 80]             512
  ConvTranspose2d-44        [-1, 128, 128, 160]         295,040
             ReLU-45        [-1, 128, 128, 160]               0
      BatchNorm2d-46        [-1, 128, 128, 160]             256
  ConvTranspose2d-47         [-1, 64, 256, 320]          73,792
             ReLU-48         [-1, 64, 256, 320]               0
      BatchNorm2d-49         [-1, 64, 256, 320]             128
  ConvTranspose2d-50         [-1, 32, 512, 640]          18,464
             ReLU-51         [-1, 32, 512, 640]               0
      BatchNorm2d-52         [-1, 32, 512, 640]              64
           Conv2d-53         [-1, 21, 512, 640]             693
================================================================
Total params: 23,954,069
Trainable params: 23,954,069
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 3.75
Forward/backward pass size (MB): 2073.75
Params size (MB): 91.38
Estimated Total Size (MB): 2168.88
----------------------------------------------------------------
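The summary confirms the output shape (N, 21, H, W). As a minimal, illustrative sketch (not from the original post), such a network is typically trained with a pixel-wise cross-entropy loss; nn.CrossEntropyLoss accepts (N, C, H, W) logits and (N, H, W) integer label maps directly. The optimizer choice and the random data below are assumptions for demonstration only.

# One training step on random data (illustrative only)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(fcn8s.parameters(), lr=1e-4)
images = torch.randn(2, 3, 512, 640) # a fake mini-batch
labels = torch.randint(0, 21, (2, 512, 640)) # fake per-pixel class labels
logits = fcn8s(images) # (2, 21, 512, 640)
loss = criterion(logits, labels) # cross-entropy averaged over all pixels
optimizer.zero_grad()
loss.backward()
optimizer.step()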

U-Net

The U-Net segmentation network was proposed by Olaf Ronneberger, Philipp Fischer, and Thomas Brox in 2015 and is designed to work with small training sets. Its design follows FCN: the whole network contains only convolutional layers, with no fully connected layers. Because it targets small datasets, it relies heavily on elastic deformations for data augmentation so that the model learns deformation invariance, which is especially useful in domains such as medical image processing. U-Net also differs from FCN in how features are fused: instead of FCN's element-wise addition, U-Net follows the DenseNet idea and concatenates features along the channel dimension. See U-Net: Convolutional Networks for Biomedical Image Segmentation; the network's segmentation pipeline is shown in the figure below:

[Figure: U-Net architecture]

As the figure shows, the network looks like the letter "U", hence the name U-Net. Its architecture consists of a contracting path (left) for feature extraction and an expanding path (right) that uses upsampling.

The contracting path on the left is like a conventional convolutional network: unpadded 3x3 convolutions (each followed by a ReLU), and 2x2 max pooling with stride 2. The max pooling performs the downsampling, and after each downsampling step the number of channels is doubled.

The expanding path on the right uses 2x2 up-convolutions whose output channels are half the input channels, concatenates the result with the corresponding (cropped) feature map from the contracting path (restoring the original channel count), and then applies two 3x3 convolutions with ReLU. A final 1x1 convolution maps the features to the desired number of target classes. In total the network has 23 convolutional layers.

Another point worth noting is that U-Net fuses features very differently from other common segmentation networks: it uses concatenation. Like the dense blocks in DenseNet, U-Net stacks features along the channel dimension, forming a "thicker" feature map. FCN instead adds corresponding elements, more like the shallow-deep feature addition in ResNet, and does not produce a thicker feature map.

In short, U-Net builds on the FCN architecture; the authors modified and extended the framework so that very few training images suffice for accurate segmentation. It adds an upsampling stage with many feature channels, which lets more of the original image's texture information propagate to the high-resolution layers. U-Net has no fully connected layers and uses valid (unpadded) convolutions throughout, which guarantees that every prediction is based on complete context with no missing pixels; as a consequence, the input and output image sizes differ, unlike in FCN.
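To make the difference between the two fusion styles concrete (a small sketch added here, not from the original text): element-wise addition keeps the channel count unchanged, while channel concatenation doubles it.

# FCN-style fusion vs. U-Net-style fusion of two feature maps
a = torch.randn(1, 64, 128, 128)
b = torch.randn(1, 64, 128, 128)
print((a + b).shape) # torch.Size([1, 64, 128, 128]): element-wise add (FCN)
print(torch.cat((a, b), dim=1).shape) # torch.Size([1, 128, 128, 128]): channel concat (U-Net)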

Building the U-Net Network

Note: unlike the valid convolutions in the original paper, the implementation below pads every 3x3 convolution (padding=1), so feature maps keep their size and the skip connections can be concatenated without cropping.
class conv_block(nn.Module):
    def __init__(self, ch_in, ch_out):
        super(conv_block, self).__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1, bias=True),
            nn.BatchNorm2d(ch_out),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch_out, ch_out, kernel_size=3, stride=1, padding=1, bias=True),
            nn.BatchNorm2d(ch_out),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        x = self.conv(x)
        return x

class up_conv(nn.Module):
    def __init__(self, ch_in, ch_out):
        super(up_conv, self).__init__()
        self.up = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(ch_in, ch_out, kernel_size=3, stride=1, padding=1, bias=True),
            nn.BatchNorm2d(ch_out),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        x = self.up(x)
        return x


class U_Net(nn.Module):
    def __init__(self, img_ch=3, output_ch=1):
        super(U_Net, self).__init__()

        self.Maxpool = nn.MaxPool2d(kernel_size=2, stride=2)

        self.Conv1 = conv_block(ch_in=img_ch, ch_out=64)
        self.Conv2 = conv_block(ch_in=64, ch_out=128)
        self.Conv3 = conv_block(ch_in=128, ch_out=256)
        self.Conv4 = conv_block(ch_in=256, ch_out=512)
        self.Conv5 = conv_block(ch_in=512, ch_out=1024)

        self.Up5 = up_conv(ch_in=1024, ch_out=512)
        self.Up_conv5 = conv_block(ch_in=1024, ch_out=512)

        self.Up4 = up_conv(ch_in=512, ch_out=256)
        self.Up_conv4 = conv_block(ch_in=512, ch_out=256)

        self.Up3 = up_conv(ch_in=256, ch_out=128)
        self.Up_conv3 = conv_block(ch_in=256, ch_out=128)

        self.Up2 = up_conv(ch_in=128, ch_out=64)
        self.Up_conv2 = conv_block(ch_in=128, ch_out=64)

        self.Conv_1x1 = nn.Conv2d(64, output_ch, kernel_size=1, stride=1, padding=0)

    def forward(self, x):
        # encoding path
        x1 = self.Conv1(x)

        x2 = self.Maxpool(x1)
        x2 = self.Conv2(x2)

        x3 = self.Maxpool(x2)
        x3 = self.Conv3(x3)

        x4 = self.Maxpool(x3)
        x4 = self.Conv4(x4)

        x5 = self.Maxpool(x4)
        x5 = self.Conv5(x5)

        # decoding + concat path
        d5 = self.Up5(x5)
        d5 = torch.cat((x4, d5), dim=1)

        d5 = self.Up_conv5(d5)

        d4 = self.Up4(d5)
        d4 = torch.cat((x3, d4), dim=1)
        d4 = self.Up_conv4(d4)

        d3 = self.Up3(d4)
        d3 = torch.cat((x2, d3), dim=1)
        d3 = self.Up_conv3(d3)

        d2 = self.Up2(d3)
        d2 = torch.cat((x1, d2), dim=1)
        d2 = self.Up_conv2(d2)

        d1 = self.Conv_1x1(d2)

        return d1
# Assume there are 21 classes to segment
unet = U_Net(3, 21)
summary(unet, input_size=(3, 512, 640), device='cpu')
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 512, 640]           1,792
       BatchNorm2d-2         [-1, 64, 512, 640]             128
              ReLU-3         [-1, 64, 512, 640]               0
            Conv2d-4         [-1, 64, 512, 640]          36,928
       BatchNorm2d-5         [-1, 64, 512, 640]             128
              ReLU-6         [-1, 64, 512, 640]               0
        conv_block-7         [-1, 64, 512, 640]               0
         MaxPool2d-8         [-1, 64, 256, 320]               0
            Conv2d-9        [-1, 128, 256, 320]          73,856
      BatchNorm2d-10        [-1, 128, 256, 320]             256
             ReLU-11        [-1, 128, 256, 320]               0
           Conv2d-12        [-1, 128, 256, 320]         147,584
      BatchNorm2d-13        [-1, 128, 256, 320]             256
             ReLU-14        [-1, 128, 256, 320]               0
       conv_block-15        [-1, 128, 256, 320]               0
        MaxPool2d-16        [-1, 128, 128, 160]               0
           Conv2d-17        [-1, 256, 128, 160]         295,168
      BatchNorm2d-18        [-1, 256, 128, 160]             512
             ReLU-19        [-1, 256, 128, 160]               0
           Conv2d-20        [-1, 256, 128, 160]         590,080
      BatchNorm2d-21        [-1, 256, 128, 160]             512
             ReLU-22        [-1, 256, 128, 160]               0
       conv_block-23        [-1, 256, 128, 160]               0
        MaxPool2d-24          [-1, 256, 64, 80]               0
           Conv2d-25          [-1, 512, 64, 80]       1,180,160
      BatchNorm2d-26          [-1, 512, 64, 80]           1,024
             ReLU-27          [-1, 512, 64, 80]               0
           Conv2d-28          [-1, 512, 64, 80]       2,359,808
      BatchNorm2d-29          [-1, 512, 64, 80]           1,024
             ReLU-30          [-1, 512, 64, 80]               0
       conv_block-31          [-1, 512, 64, 80]               0
        MaxPool2d-32          [-1, 512, 32, 40]               0
           Conv2d-33         [-1, 1024, 32, 40]       4,719,616
      BatchNorm2d-34         [-1, 1024, 32, 40]           2,048
             ReLU-35         [-1, 1024, 32, 40]               0
           Conv2d-36         [-1, 1024, 32, 40]       9,438,208
      BatchNorm2d-37         [-1, 1024, 32, 40]           2,048
             ReLU-38         [-1, 1024, 32, 40]               0
       conv_block-39         [-1, 1024, 32, 40]               0
         Upsample-40         [-1, 1024, 64, 80]               0
           Conv2d-41          [-1, 512, 64, 80]       4,719,104
      BatchNorm2d-42          [-1, 512, 64, 80]           1,024
             ReLU-43          [-1, 512, 64, 80]               0
          up_conv-44          [-1, 512, 64, 80]               0
           Conv2d-45          [-1, 512, 64, 80]       4,719,104
      BatchNorm2d-46          [-1, 512, 64, 80]           1,024
             ReLU-47          [-1, 512, 64, 80]               0
           Conv2d-48          [-1, 512, 64, 80]       2,359,808
      BatchNorm2d-49          [-1, 512, 64, 80]           1,024
             ReLU-50          [-1, 512, 64, 80]               0
       conv_block-51          [-1, 512, 64, 80]               0
         Upsample-52        [-1, 512, 128, 160]               0
           Conv2d-53        [-1, 256, 128, 160]       1,179,904
      BatchNorm2d-54        [-1, 256, 128, 160]             512
             ReLU-55        [-1, 256, 128, 160]               0
          up_conv-56        [-1, 256, 128, 160]               0
           Conv2d-57        [-1, 256, 128, 160]       1,179,904
      BatchNorm2d-58        [-1, 256, 128, 160]             512
             ReLU-59        [-1, 256, 128, 160]               0
           Conv2d-60        [-1, 256, 128, 160]         590,080
      BatchNorm2d-61        [-1, 256, 128, 160]             512
             ReLU-62        [-1, 256, 128, 160]               0
       conv_block-63        [-1, 256, 128, 160]               0
         Upsample-64        [-1, 256, 256, 320]               0
           Conv2d-65        [-1, 128, 256, 320]         295,040
      BatchNorm2d-66        [-1, 128, 256, 320]             256
             ReLU-67        [-1, 128, 256, 320]               0
          up_conv-68        [-1, 128, 256, 320]               0
           Conv2d-69        [-1, 128, 256, 320]         295,040
      BatchNorm2d-70        [-1, 128, 256, 320]             256
             ReLU-71        [-1, 128, 256, 320]               0
           Conv2d-72        [-1, 128, 256, 320]         147,584
      BatchNorm2d-73        [-1, 128, 256, 320]             256
             ReLU-74        [-1, 128, 256, 320]               0
       conv_block-75        [-1, 128, 256, 320]               0
         Upsample-76        [-1, 128, 512, 640]               0
           Conv2d-77         [-1, 64, 512, 640]          73,792
      BatchNorm2d-78         [-1, 64, 512, 640]             128
             ReLU-79         [-1, 64, 512, 640]               0
          up_conv-80         [-1, 64, 512, 640]               0
           Conv2d-81         [-1, 64, 512, 640]          73,792
      BatchNorm2d-82         [-1, 64, 512, 640]             128
             ReLU-83         [-1, 64, 512, 640]               0
           Conv2d-84         [-1, 64, 512, 640]          36,928
      BatchNorm2d-85         [-1, 64, 512, 640]             128
             ReLU-86         [-1, 64, 512, 640]               0
       conv_block-87         [-1, 64, 512, 640]               0
           Conv2d-88         [-1, 21, 512, 640]           1,365
================================================================
Total params: 34,528,341
Trainable params: 34,528,341
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 3.75
Forward/backward pass size (MB): 6197.50
Params size (MB): 131.72
Estimated Total Size (MB): 6332.97
----------------------------------------------------------------

SegNet

The SegNet segmentation network was proposed by Vijay Badrinarayanan, Alex Kendall, and Roberto Cipolla in 2015; it is essentially a convolutional neural network built on the image autoencoder idea.

We covered image autoencoders in detail in an earlier tutorial, so the principle of SegNet is not hard to grasp here.

SegNet consists of an encoder network, a decoder network, and a pixel-wise classifier. Its novelty lies in how the decoder upsamples: it reuses the max-pooling indices recorded during the encoder's downsampling. Compared with FCN, it requires less memory and less training time. See SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation; the architecture, shown in the figure, has three parts:

[Figure: SegNet encoder-decoder architecture]

Encoder Network:

SegNet uses a pretrained VGG16 with the fully connected layers removed, keeping only the first 13 convolutional layers as the encoder. This preserves higher-resolution feature maps while also reducing the number of parameters, making training more efficient.

The encoder is divided into 5 blocks, each consisting of convolution + batch normalization + max pooling. The pooling layer performs the downsampling; as in U-Net, it uses a kernel of size 2 with stride 2.

Decoder Network:

Each encoder layer has a corresponding decoder layer, so the decoder also has 13 layers. It maps the low-resolution features back up to the input resolution for classification, giving the whole network a roughly symmetric structure.

When the encoder max-pools, SegNet records the position of each maximum, and the decoder later uses these recorded positions to restore the values; this is SegNet's key innovation. The decoder is likewise divided into 5 blocks, each consisting of upsampling + convolution + batch normalization.

Pixelwise Classification Layer:

The decoder output is fed to the classification layer, which produces class probabilities for each pixel independently.

A convolutional layer is attached to the last decoder output to perform pixel-level classification; the number of kernels equals the number of classes, and each output channel represents the segmentation result for one class.

Comparing with FCN and U-Net, SegNet's upsampling uses the encoder's pooling indices to put values straight back into their original positions, with trainable convolutions attached afterwards; the upsampling itself needs no learning and only costs some extra memory for the indices. FCN and U-Net instead obtain the upsampled features with learned transpose convolutions, and also reduce the channel dimension of the corresponding encoder features so it matches the upsampled features, which is what makes FCN's element-wise addition / U-Net's concatenation possible before the final output.
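The index-based upsampling is easy to demonstrate in isolation (a minimal sketch added here, using the same F.max_pool2d_with_indices / F.max_unpool2d pair as the implementation below):

# Max pooling records where each maximum came from; unpooling puts the
# values back at exactly those positions and fills the rest with zeros
x = torch.tensor([[[[ 1.,  2.,  3.,  4.],
                    [ 5.,  6.,  7.,  8.],
                    [ 9., 10., 11., 12.],
                    [13., 14., 15., 16.]]]])
pooled, indices = F.max_pool2d_with_indices(x, kernel_size=2, stride=2)
restored = F.max_unpool2d(pooled, indices, kernel_size=2, stride=2)
print(pooled) # the four block maxima: 6, 8, 14, 16
print(restored) # maxima restored in place, zeros elsewhere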

Building the SegNet Network
class SegNet(nn.Module):
    def __init__(self, in_channels, out_channels):
        super(SegNet, self).__init__()
        self.conv11 = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        self.bn11 = nn.BatchNorm2d(64)
        self.conv12 = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.bn12 = nn.BatchNorm2d(64)
        #maxpool1
        self.conv21 = nn.Conv2d(64, 128, kernel_size=3, padding=1)
        self.bn21 = nn.BatchNorm2d(128)
        self.conv22 = nn.Conv2d(128, 128, kernel_size=3, padding=1)
        self.bn22 = nn.BatchNorm2d(128)
        #maxpool2
        self.conv31 = nn.Conv2d(128, 256, kernel_size=3, padding=1)
        self.bn31 = nn.BatchNorm2d(256)
        self.conv32 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.bn32 = nn.BatchNorm2d(256)
        self.conv33 = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.bn33 = nn.BatchNorm2d(256)
        #maxpool3
        self.conv41 = nn.Conv2d(256, 512, kernel_size=3, padding=1)
        self.bn41 = nn.BatchNorm2d(512)
        self.conv42 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn42 = nn.BatchNorm2d(512)
        self.conv43 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn43 = nn.BatchNorm2d(512)
        #maxpool4
        self.conv51 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn51 = nn.BatchNorm2d(512)
        self.conv52 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn52 = nn.BatchNorm2d(512)
        self.conv53 = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn53 = nn.BatchNorm2d(512)
        #maxpool5

        self.conv51d = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn51d = nn.BatchNorm2d(512)
        self.conv52d = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn52d = nn.BatchNorm2d(512)
        self.conv53d = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn53d = nn.BatchNorm2d(512)

        self.conv43d = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn43d = nn.BatchNorm2d(512)
        self.conv42d = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.bn42d = nn.BatchNorm2d(512)
        self.conv41d = nn.Conv2d(512, 256, kernel_size=3, padding=1)
        self.bn41d = nn.BatchNorm2d(256)

        self.conv33d = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.bn33d = nn.BatchNorm2d(256)
        self.conv32d = nn.Conv2d(256, 256, kernel_size=3, padding=1)
        self.bn32d = nn.BatchNorm2d(256)
        self.conv31d = nn.Conv2d(256, 128, kernel_size=3, padding=1)
        self.bn31d = nn.BatchNorm2d(128)

        self.conv22d = nn.Conv2d(128, 128, kernel_size=3, padding=1)
        self.bn22d = nn.BatchNorm2d(128)
        self.conv21d = nn.Conv2d(128, 64, kernel_size=3, padding=1)
        self.bn21d = nn.BatchNorm2d(64)

        self.conv12d = nn.Conv2d(64, 64, kernel_size=3, padding=1)
        self.bn12d = nn.BatchNorm2d(64)
        self.conv11d = nn.Conv2d(64, out_channels, kernel_size=3, padding=1)

    def forward(self, input):

        x11 = F.relu(self.bn11(self.conv11(input)))
        x12 = F.relu(self.bn12(self.conv12(x11)))
        x1p, id1 = F.max_pool2d_with_indices(x12, kernel_size=2, stride=2, return_indices=True)

        x21 = F.relu(self.bn21(self.conv21(x1p)))
        x22 = F.relu(self.bn22(self.conv22(x21)))
        x2p, id2 = F.max_pool2d_with_indices(x22, kernel_size=2, stride=2, return_indices=True)

        x31 = F.relu(self.bn31(self.conv31(x2p)))
        x32 = F.relu(self.bn32(self.conv32(x31)))
        x33 = F.relu(self.bn33(self.conv33(x32)))
        x3p, id3 = F.max_pool2d_with_indices(x33, kernel_size=2, stride=2, return_indices=True)

        x41 = F.relu(self.bn41(self.conv41(x3p)))
        x42 = F.relu(self.bn42(self.conv42(x41)))
        x43 = F.relu(self.bn43(self.conv43(x42)))
        x4p, id4 = F.max_pool2d_with_indices(x43, kernel_size=2, stride=2, return_indices=True)

        x51 = F.relu(self.bn51(self.conv51(x4p)))
        x52 = F.relu(self.bn52(self.conv52(x51)))
        x53 = F.relu(self.bn53(self.conv53(x52)))
        x5p, id5 = F.max_pool2d_with_indices(x53, kernel_size=2, stride=2, return_indices=True)
        
        # print(x5p.size(), id5.size())
        # Decoder: each stage unpools with the stored indices,
        # then applies conv - bn - activation blocks

        x5d = F.max_unpool2d(x5p, id5, kernel_size=2, stride=2)
        x53d = F.relu(self.bn53d(self.conv53d(x5d)))
        x52d = F.relu(self.bn52d(self.conv52d(x53d)))
        x51d = F.relu(self.bn51d(self.conv51d(x52d)))

        x4d = F.max_unpool2d(x51d, id4, kernel_size=2, stride=2)
        x43d = F.relu(self.bn43d(self.conv43d(x4d)))
        x42d = F.relu(self.bn42d(self.conv42d(x43d)))
        x41d = F.relu(self.bn41d(self.conv41d(x42d)))

        x3d = F.max_unpool2d(x41d, id3, kernel_size=2, stride=2)
        x33d = F.relu(self.bn33d(self.conv33d(x3d)))
        x32d = F.relu(self.bn32d(self.conv32d(x33d)))
        x31d = F.relu(self.bn31d(self.conv31d(x32d)))

        x2d = F.max_unpool2d(x31d, id2, kernel_size=2, stride=2)
        x22d = F.relu(self.bn22d(self.conv22d(x2d)))
        x21d = F.relu(self.bn21d(self.conv21d(x22d)))

        x1d = F.max_unpool2d(x21d, id1, kernel_size=2, stride=2)
        x12d = F.relu(self.bn12d(self.conv12d(x1d)))
        x11d = self.conv11d(x12d)

        return x11d
# Assume there are 21 classes to segment
segnet = SegNet(3, 21)
summary(segnet, input_size=(3, 512, 640), device='cpu')
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 512, 640]           1,792
       BatchNorm2d-2         [-1, 64, 512, 640]             128
            Conv2d-3         [-1, 64, 512, 640]          36,928
       BatchNorm2d-4         [-1, 64, 512, 640]             128
            Conv2d-5        [-1, 128, 256, 320]          73,856
       BatchNorm2d-6        [-1, 128, 256, 320]             256
            Conv2d-7        [-1, 128, 256, 320]         147,584
       BatchNorm2d-8        [-1, 128, 256, 320]             256
            Conv2d-9        [-1, 256, 128, 160]         295,168
      BatchNorm2d-10        [-1, 256, 128, 160]             512
           Conv2d-11        [-1, 256, 128, 160]         590,080
      BatchNorm2d-12        [-1, 256, 128, 160]             512
           Conv2d-13        [-1, 256, 128, 160]         590,080
      BatchNorm2d-14        [-1, 256, 128, 160]             512
           Conv2d-15          [-1, 512, 64, 80]       1,180,160
      BatchNorm2d-16          [-1, 512, 64, 80]           1,024
           Conv2d-17          [-1, 512, 64, 80]       2,359,808
      BatchNorm2d-18          [-1, 512, 64, 80]           1,024
           Conv2d-19          [-1, 512, 64, 80]       2,359,808
      BatchNorm2d-20          [-1, 512, 64, 80]           1,024
           Conv2d-21          [-1, 512, 32, 40]       2,359,808
      BatchNorm2d-22          [-1, 512, 32, 40]           1,024
           Conv2d-23          [-1, 512, 32, 40]       2,359,808
      BatchNorm2d-24          [-1, 512, 32, 40]           1,024
           Conv2d-25          [-1, 512, 32, 40]       2,359,808
      BatchNorm2d-26          [-1, 512, 32, 40]           1,024
           Conv2d-27          [-1, 512, 32, 40]       2,359,808
      BatchNorm2d-28          [-1, 512, 32, 40]           1,024
           Conv2d-29          [-1, 512, 32, 40]       2,359,808
      BatchNorm2d-30          [-1, 512, 32, 40]           1,024
           Conv2d-31          [-1, 512, 32, 40]       2,359,808
      BatchNorm2d-32          [-1, 512, 32, 40]           1,024
           Conv2d-33          [-1, 512, 64, 80]       2,359,808
      BatchNorm2d-34          [-1, 512, 64, 80]           1,024
           Conv2d-35          [-1, 512, 64, 80]       2,359,808
      BatchNorm2d-36          [-1, 512, 64, 80]           1,024
           Conv2d-37          [-1, 256, 64, 80]       1,179,904
      BatchNorm2d-38          [-1, 256, 64, 80]             512
           Conv2d-39        [-1, 256, 128, 160]         590,080
      BatchNorm2d-40        [-1, 256, 128, 160]             512
           Conv2d-41        [-1, 256, 128, 160]         590,080
      BatchNorm2d-42        [-1, 256, 128, 160]             512
           Conv2d-43        [-1, 128, 128, 160]         295,040
      BatchNorm2d-44        [-1, 128, 128, 160]             256
           Conv2d-45        [-1, 128, 256, 320]         147,584
      BatchNorm2d-46        [-1, 128, 256, 320]             256
           Conv2d-47         [-1, 64, 256, 320]          73,792
      BatchNorm2d-48         [-1, 64, 256, 320]             128
           Conv2d-49         [-1, 64, 512, 640]          36,928
      BatchNorm2d-50         [-1, 64, 512, 640]             128
           Conv2d-51         [-1, 21, 512, 640]          12,117
================================================================
Total params: 29,455,125
Trainable params: 29,455,125
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 3.75
Forward/backward pass size (MB): 2292.50
Params size (MB): 112.36
Estimated Total Size (MB): 2408.61
----------------------------------------------------------------

Using Pretrained Semantic Segmentation Networks

PyTorch provides two families of pretrained segmentation networks, FCN and DeepLabV3, both with ResNet backbones (e.g. FCN ResNet101 and DeepLabV3 ResNet101). These segmentation classifiers expect the same preprocessing for the input images: first scale each image's pixel values to the range 0 to 1, then standardize with mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225].

The pretrained models were trained on a subset of COCO train2017.

The Pascal VOC (Pattern Analysis, Statistical Modelling and Computational Learning; Visual Object Classes) dataset has 20 object classes plus 1 background class. The 20 classes fall into 4 broad groups: person; animals (bird, cat, cow, horse, sheep); vehicles (aeroplane, bicycle, boat, bus, car, motorbike, train); and indoor objects (bottle, chair, dining table, potted plant, sofa, tv/monitor).
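For reference, here are the 21 categories in their conventional VOC order (this list is added for convenience and is not in the original post; the label_colors table defined later follows the same order):

# Pascal VOC class order: index 0 is background, then the 20 object classes
VOC_CLASSES = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
               'bus', 'car', 'cat', 'chair', 'cow', 'dining table', 'dog',
               'horse', 'motorbike', 'person', 'potted plant', 'sheep',
               'sofa', 'train', 'tv/monitor']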

Network                               Description
segmentation.fcn_resnet50()           Fully convolutional network with a ResNet-50 backbone
segmentation.fcn_resnet101()          Fully convolutional network with a ResNet-101 backbone
segmentation.deeplabv3_resnet50()     DeepLab V3 model with a ResNet-50 backbone
segmentation.deeplabv3_resnet101()    DeepLab V3 model with a ResNet-101 backbone

Next we take segmentation.fcn_resnet101() as an example to show how to use a pretrained network for image semantic segmentation.

model = torchvision.models.segmentation.fcn_resnet101(pretrained=True)
# Switch to evaluation mode
model.eval()
# Load an image
image = PIL.Image.open('./data/VOC2012/JPEGImages/2007_007235.jpg')
# Preprocess: scale to [0, 1], then standardize
img_transform = transforms.Compose([transforms.ToTensor(),
                                    transforms.Normalize(mean=[0.485,0.456,0.406],
                                                         std=[0.229,0.224,0.225])])
img_tensor = img_transform(image).unsqueeze(0)
img_tensor.shape
torch.Size([1, 3, 333, 500])
# Run inference
output = model(img_tensor)['out']
# Reduce the 3-D score map to a 2-D label map
output_arg = torch.argmax(output.squeeze(), dim=0).numpy()
output_arg.shape
(333, 500)

For the prediction over the whole image we only need the prediction tensor stored under the network output's 'out' key; it is a 3-D tensor of class scores, which argmax reduces to a 2-D matrix.

Each value in this 2-D matrix is the predicted class of the corresponding pixel in the image.
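As a quick sanity check (assuming the VOC_CLASSES list defined above), we can list which classes actually appear in the prediction:

# Which classes does the model predict in this image?
for cls in np.unique(output_arg):
    print(cls, VOC_CLASSES[cls])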

To inspect the segmentation result more intuitively, we can map each predicted class to a distinct color and then visualize the result.

# Decode the predicted label map into an RGB image
def decode_segmaps(image, label_colors, nc=21):
    r = np.zeros_like(image).astype(np.uint8)
    g = np.zeros_like(image).astype(np.uint8)
    b = np.zeros_like(image).astype(np.uint8)
    for cls in range(0, nc):
        idx = image == cls # boolean mask of the pixels predicted as class cls
        r[idx] = label_colors[cls, 0]
        g[idx] = label_colors[cls, 1]
        b[idx] = label_colors[cls, 2]
    rgbimage = np.stack([r, g, b], axis=2) # stack the channels into an RGB image
    return rgbimage
label_colors = np.array([(0, 0, 0), # background
                        (128, 0, 0), (0, 128, 0), (128, 128, 0), (0, 0, 128), (128, 0, 128),
                        # aeroplane, bicycle, bird, boat, bottle
                        (0, 128, 128), (128, 128, 128), (64, 0, 0), (192, 0, 0), (64, 128, 0),
                        # bus, car, cat, chair, cow
                        (192, 128, 0), (64, 0, 128), (192, 0, 128), (64, 128, 128), (192, 128, 128),
                        # dining table, dog, horse, motorbike, person
                        (0, 64, 0), (128, 64, 0), (0, 192, 0), (128, 192, 0), (0, 64, 128)
                        # potted plant, sheep, sofa, train, tv/monitor
                        ])
output_rgb = decode_segmaps(output_arg, label_colors)
plt.figure(figsize=(20, 8))
plt.subplot(1, 2, 1)
plt.imshow(image)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(output_rgb)
plt.axis('off')
plt.subplots_adjust(wspace=0.05)
plt.show()


[Figure: input image and predicted segmentation, side by side]

Putting the steps above together (this assumes model, the img_transform preprocessing, and the decode_segmaps decoder are already defined):

image = PIL.Image.open('./data/VOC2012/JPEGImages/2007_007917.jpg')
img_tensor = img_transform(image).unsqueeze(0)
output = model(img_tensor)['out']
output_arg = torch.argmax(output.squeeze(), dim=0).numpy()
output_rgb = decode_segmaps(output_arg, label_colors)
plt.figure(figsize=(20, 8))
plt.subplot(1, 2, 1)
plt.imshow(image)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(output_rgb)
plt.axis('off')
plt.subplots_adjust(wspace=0.05)
plt.show()


[Figure: input image and predicted segmentation, side by side]

Rendering the Three Model Structures

We use torchviz to render the computation graphs of the three networks, as shown below.

FCN
# Render the FCN network structure
x = torch.randn(1, 3, 512, 640).requires_grad_(True) # one sample of size (3, 512, 640)
y = fcn8s.cpu()(x)
myFCN_vis = make_dot(y, params=dict(list(fcn8s.named_parameters()) + [('x', x)]))
# myFCN_vis.render('fcn_model', view=False) # saves fcn_model.pdf; with view=True the PDF opens automatically
myFCN_vis


[Figure: FCN computation graph]

U-Net
# Render the U-Net network structure
x = torch.randn(1, 3, 512, 640).requires_grad_(True) # one sample of size (3, 512, 640)
y = unet(x)
myUNet_vis = make_dot(y, params=dict(list(unet.named_parameters()) + [('x', x)]))
# myUNet_vis.render('unet_model', view=False) # saves unet_model.pdf; with view=True the PDF opens automatically
myUNet_vis


[Figure: U-Net computation graph]

SegNet
# Render the SegNet network structure
x = torch.randn(1, 3, 512, 640).requires_grad_(True) # one sample of size (3, 512, 640)
y = segnet.cpu()(x)
mySegNet_vis = make_dot(y, params=dict(list(segnet.named_parameters()) + [('x', x)]))
# mySegNet_vis.render('segnet_model', view=False) # saves segnet_model.pdf; with view=True the PDF opens automatically
mySegNet_vis


[Figure: SegNet computation graph]


Reposted from blog.csdn.net/weixin_44979150/article/details/123649249