A Collection of Potentially Useful PyTorch Code Snippets

Copyright notice: Feel free to repost; a link back to the original would be appreciated. ( ⊙ o ⊙ ) https://blog.csdn.net/Hungryof/article/details/88527357

Overview

A record of some fairly useful PyTorch snippets (some written by me, some collected from around the web).

Contents

  • Extracting network features (for networks built with Sequential)
  • Modifying a pretrained network (e.g. ResNet)

Extracting network features (for networks built with Sequential)

import torch.nn as nn
from torchvision import models


class VGG16FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        vgg16 = models.vgg16(pretrained=True)
        self.enc_1 = nn.Sequential(*vgg16.features[:5])
        self.enc_2 = nn.Sequential(*vgg16.features[5:10])
        self.enc_3 = nn.Sequential(*vgg16.features[10:17])

        # fix the encoder: freeze all three slices
        for i in range(3):
            # this getattr-by-name pattern is really nice!
            for param in getattr(self, 'enc_{:d}'.format(i + 1)).parameters():
                param.requires_grad = False

    def forward(self, image):
        results = [image]
        for i in range(3):
            func = getattr(self, 'enc_{:d}'.format(i + 1))
            results.append(func(results[-1]))
        return results[1:]
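
A minimal usage sketch (the dummy input and printed shapes are my own, not from the original post); the extractor returns one feature map per encoder slice:

import torch

extractor = VGG16FeatureExtractor().eval()
x = torch.randn(1, 3, 224, 224)   # dummy ImageNet-sized batch
with torch.no_grad():
    feats = extractor(x)          # list of 3 feature maps
for f in feats:
    print(f.shape)  # [1, 64, 112, 112], [1, 128, 56, 56], [1, 256, 28, 28]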

Modifying a pretrained network (e.g. ResNet)

When you only need to tweak an existing large network, you can simply copy the relevant file from torchvision on GitHub. For example, to modify ResNet, first copy resnet.py. A network this large is usually not built from a single Sequential, so the method above cannot be applied directly: the network is composed of several attributes, each of which is a block.

class ResNet(nn.Module):

    def __init__(self, block, layers, num_classes=1000, zero_init_residual=False):
        super(ResNet, self).__init__()
        self.inplanes = 64
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        # self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        # self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        # self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        # self.avgpool = nn.AdaptiveAvgPool2d((1, 1))
        # self.fc = nn.Linear(512 * block.expansion, num_classes)

    ...

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        # x = self.layer2(x)
        # x = self.layer3(x)
        # x = self.layer4(x)

        # x = self.avgpool(x)
        # x = x.view(x.size(0), -1)
        # x = self.fc(x)

        return x
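
One caveat worth adding (my note, not spelled out in the original post): after deleting layers, the pretrained checkpoint still carries weights for layer2 through fc, so the resnet101 constructor in the copied resnet.py has to load the state dict non-strictly, along these lines (Bottleneck and model_urls are defined in the copied resnet.py):

import torch.utils.model_zoo as model_zoo

def resnet101(pretrained=False, **kwargs):
    model = ResNet(Bottleneck, [3, 4, 23, 3], **kwargs)
    if pretrained:
        # strict=False skips checkpoint keys (layer2..layer4, fc) that no
        # longer exist in the truncated model
        model.load_state_dict(
            model_zoo.load_url(model_urls['resnet101']), strict=False)
    return model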

For example, to keep just the first ten or so conv layers of ResNet, commenting out layers like this is all it takes. Then, in another file:

import torch
import torch.nn as nn

from .resnet import resnet101

...
class ResBase(nn.Module):
    def __init__(self):
        super(ResBase, self).__init__()
        # front_end has been truncated till conv2_3
        self.front_end = resnet101(pretrained=True)
        self.max_pool = torch.nn.MaxPool2d(3, stride=2, padding=1, dilation=1, ceil_mode=False)
        # Add some layers you want
        self.back_end = torch.nn.Sequential(
            torch.nn.Conv2d(256, 128, 3, dilation=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(128, 64, 3, dilation=1, padding=1),
            torch.nn.ReLU(),
            torch.nn.Conv2d(64, 3, 3, dilation=1, padding=1)
        )
    
    def forward(self, x):
        x = self.front_end(x)
        x = self.max_pool(x)
        return self.back_end(x)

    # re-load pretrained weights into front_end (call after any global random init)
    def _initialize_weights(self):
        self.front_end = resnet101(pretrained=True)

**Note that in a network built this way, front_end should run with the original pretrained weights, while the back_end added afterwards uses random initialization.**
So first randomly initialize the whole network, then call resnet._initialize_weights() to restore front_end's pretrained weights.
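
A minimal sketch of that initialization order (the weights_init helper is hypothetical, not from the original post; ResBase is the class defined above):

import torch.nn as nn

def weights_init(m):
    # hypothetical global random init: re-initialize every conv layer
    if isinstance(m, nn.Conv2d):
        nn.init.normal_(m.weight, mean=0.0, std=0.02)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

resnet = ResBase()
resnet.apply(weights_init)      # step 1: random init everywhere (clobbers front_end too)
resnet._initialize_weights()    # step 2: restore pretrained weights in front_end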
