PyTorch: using cnn_finetune to load pre-trained models

1: First install cnn_finetune
pip install cnn_finetune
2: Create a resnet18 model pre-trained on ImageNet and reuse its weights for a 10-class task
from cnn_finetune import make_model

model = make_model('resnet18', num_classes=10, pretrained=True)
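As a quick sanity check (a sketch, not from the original post; it assumes the usual 3x224x224 ImageNet input that resnet18 expects), you can push a dummy batch through the model above and confirm it returns 10 logits per image:

import torch

x = torch.randn(4, 3, 224, 224)   # dummy batch of four RGB images
logits = model(x)
print(logits.shape)                # expected: torch.Size([4, 10])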
3: Create a model and set dropout
model = make_model('nasnetalarge', num_classes=10, pretrained=True, dropout_p=0.5)
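A reminder rather than anything specific to cnn_finetune: the dropout added via dropout_p behaves like any torch.nn dropout layer, so it is only active in training mode. A minimal sketch of the usual pattern:

model.train()   # dropout is applied during training steps
# ... run the training loop here ...
model.eval()    # dropout is disabled for validation and inference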
4: Change the model's global average pooling to global max pooling
import torch.nn as nn

model = make_model('inceptionresnetv2', num_classes=10, pretrained=True, pool=nn.AdaptiveMaxPool2d(1))
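Purely as an illustration (not from the original post), the average-pooling counterpart is passed the same way, using nn.AdaptiveAvgPool2d instead:

model = make_model('inceptionresnetv2', num_classes=10, pretrained=True, pool=nn.AdaptiveAvgPool2d(1))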
5: Because VGG and AlexNet models use fully connected layers, the input size must be fixed, for example:
model = make_model('vgg16', num_classes=10, pretrained=True, input_size=(256, 256))
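Since the classifier is built for that exact spatial size, the batches you feed the model must match input_size (a small sketch, assuming the vgg16 model created above):

import torch

x = torch.randn(2, 3, 256, 256)   # height/width must match input_size=(256, 256)
print(model(x).shape)             # expected: torch.Size([2, 10])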
6: Create a VGG16 model with 256x256 input images and a custom classifier:
import torch.nn as nn

def make_classifier(in_features, num_classes):
    return nn.Sequential(
        nn.Linear(in_features, 4096),
        nn.ReLU(inplace=True),
        nn.Linear(4096, num_classes),
    )

model = make_model('vgg16', num_classes=10, pretrained=True, input_size=(256, 256), classifier_factory=make_classifier)
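The factory is called with the number of input features and the number of classes, matching the make_classifier signature above. You can inspect what it builds on its own; the in_features value below is arbitrary and only for illustration:

clf = make_classifier(in_features=32768, num_classes=10)  # 32768 is an illustrative value, not the real feature size
print(clf)
# Sequential(
#   (0): Linear(in_features=32768, out_features=4096, bias=True)
#   (1): ReLU(inplace=True)
#   (2): Linear(in_features=4096, out_features=10, bias=True)
# )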
7: Show the preprocessing the original model used when it was trained on ImageNet:
model = make_model('resnext101_64x4d', num_classes=10, pretrained=True)
print(model.original_model_info)
print(model.original_model_info.mean)
[0.485, 0.456, 0.406]
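These values slot straight into a torchvision preprocessing pipeline. A sketch, assuming original_model_info also exposes a std field alongside mean (adjust the resize/crop sizes to your data):

from torchvision import transforms

info = model.original_model_info
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=info.mean, std=info.std),  # std assumed to exist, mirroring mean
])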
