Set different learning rates for different layers

When fine-tuning a pre-trained model, you typically want to set
(1) a smaller learning rate for the parameters of the pre-trained backbone, and
(2) a larger learning rate for the parts other than the backbone (e.g., a newly added head).

from collections import OrderedDict
import torch.nn as nn
import torch.optim as optim

# Toy network: treat linear1 and linear2 as the pre-trained "backbone"
# and linear3 as the newly added head.
net = nn.Sequential(OrderedDict([
    ("linear1", nn.Linear(10, 20)),
    ("linear2", nn.Linear(20, 30)),
    ("linear3", nn.Linear(30, 40))]))


# Collect the ids of linear3's parameters so they can be excluded,
# leaving only the backbone parameters in base_params.
linear3_params = list(map(id, net.linear3.parameters()))
base_params = filter(lambda p: id(p) not in linear3_params, net.parameters())

# Backbone parameters get an explicit, smaller learning rate;
# linear3 (the new head) falls back to the larger default lr below.
optimizer = optim.SGD([
    {'params': base_params, 'lr': 0.0005},
    {'params': net.linear3.parameters()}],
    lr=0.001, momentum=0.9)


print(optimizer)
print(optimizer.param_groups[0]['lr'])  # backbone group: 0.0005
print(optimizer.param_groups[1]['lr'])  # linear3 group: 0.001
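The per-group entries remain editable after the optimizer is built, which is handy for warm-up or gradual unfreezing. A minimal sketch (the schedule below is illustrative, not part of the original example):

# Hypothetical schedule: keep the backbone slow for the first 5 epochs,
# then raise its learning rate once the new head has warmed up.
for epoch in range(10):
    if epoch == 5:
        optimizer.param_groups[0]['lr'] = 0.001  # bump the backbone group
    # ... forward pass, loss.backward(), optimizer.step() ...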

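The same pattern applies to a real pre-trained backbone. A minimal sketch assuming torchvision (0.13 or later for the weights argument) is installed; resnet18 and its final fc layer stand in for the backbone and the new head:

import torch.optim as optim
from torchvision import models

# Pre-trained ResNet-18; model.fc is the classifier head to fine-tune.
model = models.resnet18(weights="IMAGENET1K_V1")

fc_params = list(map(id, model.fc.parameters()))
backbone_params = filter(lambda p: id(p) not in fc_params, model.parameters())

optimizer = optim.SGD([
    {'params': backbone_params, 'lr': 1e-4},  # small lr: adjust pre-trained weights gently
    {'params': model.fc.parameters()}],       # head trains at the larger default lr
    lr=1e-2, momentum=0.9)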