『UNet』UNet++ learning

1. Network structure

[Figure: UNet++ network structure (redesigned skip pathways in green, dense skip connections in blue, deep supervision in red)]
UNet++ adds dense blocks and extra convolutional layers between the encoder and the decoder to improve segmentation accuracy.

UNet++ adds three things to the original U-Net:

  1. Redesigned skip pathways (shown in green): to bridge the semantic gap between the encoder and decoder sub-paths
  2. Dense skip connections (shown in blue)
  3. Deep supervision (shown in red); a short sketch of this follows the list
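
For the third point, deep supervision attaches a segmentation head to each node on the top skip pathway, so the network is trained at multiple semantic levels and can later be pruned to a shallower sub-network. Below is a minimal sketch, assuming PyTorch; the 1×1 convolution heads, channel count, and output averaging are illustrative of the idea rather than the paper's exact training code.

```python
import torch
import torch.nn as nn

# Hypothetical full-resolution feature maps from the top-pathway nodes
# X^{0,1}, X^{0,2}, X^{0,3} (the channel count 32 is illustrative).
x01, x02, x03 = (torch.randn(1, 32, 64, 64) for _ in range(3))

# One 1x1 conv segmentation head per supervised node.
heads = nn.ModuleList(nn.Conv2d(32, 1, kernel_size=1) for _ in range(3))

# Each head yields a full-resolution mask; during training a loss is
# applied to every output, and at inference the outputs can be averaged,
# or a single shallow head can be kept to prune the network.
outputs = [head(x) for head, x in zip(heads, (x01, x02, x03))]
final = torch.stack(outputs).mean(dim=0)
print(final.shape)  # torch.Size([1, 1, 64, 64])
```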

1. Redesigned skip pathways

In UNet++, the redesigned skip pathways (shown in green) bridge the semantic gap between the encoder and decoder sub-paths.

The convolutional layers on each pathway reduce the semantic gap between the feature maps of the encoder and decoder sub-networks, which presumably gives the optimizer an easier optimization problem to solve.

U-Net's skip connections directly concatenate encoder feature maps with decoder feature maps, fusing semantically dissimilar features. In UNet++, by contrast, each node fuses the output of the previous convolutional layer in the same dense block with the upsampled output of the node in the dense block below. This brings the semantic level of the encoded features closer to that of the feature maps waiting in the decoder, so optimization is easier when the fused feature maps are semantically similar.
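
Below is a minimal sketch of one such node, assuming PyTorch; the ConvBlock module, channel sizes, and node names are illustrative assumptions, not the authors' reference code. Node X^{0,1} fuses the same-level feature map X^{0,0} with the upsampled output of the lower node X^{1,0}.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 conv + BN + ReLU layers, a common choice for each node."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)

x00 = torch.randn(1, 32, 64, 64)  # encoder node X^{0,0}
x10 = torch.randn(1, 64, 32, 32)  # lower encoder node X^{1,0}

# Skip-pathway node X^{0,1}: concatenate X^{0,0} with Up(X^{1,0}), then convolve.
node01 = ConvBlock(32 + 64, 32)
x01 = node01(torch.cat([x00, up(x10)], dim=1))
print(x01.shape)  # torch.Size([1, 32, 64, 64])
```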

2. Dense skip connections

In UNet++, dense skip connections (shown in blue) implement the skip pathways between encoder and decoder. These dense blocks are inspired by DenseNet and are meant to improve both segmentation accuracy and gradient flow.

Dense skip connections ensure that all prior feature maps on a skip pathway are accumulated and reach the current node through the dense convolution blocks. This yields full-resolution feature maps at multiple semantic levels.
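
To make the accumulation concrete, here is a minimal sketch, again assuming PyTorch; the node names and channel sizes are illustrative. Node X^{0,2} receives every earlier node on its pathway (X^{0,0} and X^{0,1}) plus the upsampled lower node X^{1,1}, which is what produces full-resolution feature maps at multiple semantic levels.

```python
import torch
import torch.nn as nn

up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)

x00 = torch.randn(1, 32, 64, 64)  # X^{0,0}: first node on the top pathway
x01 = torch.randn(1, 32, 64, 64)  # X^{0,1}: intermediate skip-pathway node
x11 = torch.randn(1, 64, 32, 32)  # X^{1,1}: node from the pathway below

# Node X^{0,2} sees ALL earlier nodes on its pathway plus the upsampled
# lower node, so feature maps accumulate along each skip pathway.
conv02 = nn.Conv2d(32 + 32 + 64, 32, kernel_size=3, padding=1)
x02 = conv02(torch.cat([x00, x01, up(x11)], dim=1))
print(x02.shape)  # torch.Size([1, 32, 64, 64]) -- full-resolution features
```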

Origin: blog.csdn.net/libo1004/article/details/111031996