Method One
- Set requires_grad to False for the parameters that need to be frozen.
- Pass the remaining parameters to the optimizer through a filter on requires_grad, as shown below.
# set requires_grad to False for the sub-module to freeze
for p in net.XXX.parameters():
    p.requires_grad = False
# filter: hand the optimizer only the parameters that still require gradients
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, net.parameters()), lr=1e-3)
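A minimal end-to-end sketch of method one, assuming a small illustrative network (the class name SmallNet and its layer names are hypothetical, not part of the original):

import torch
import torch.nn as nn

class SmallNet(nn.Module):  # hypothetical network for illustration
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(10, 10)  # sub-module to freeze
        self.fc = nn.Linear(10, 2)         # head to keep training

    def forward(self, x):
        return self.fc(self.backbone(x))

net = SmallNet()
for p in net.backbone.parameters():  # freeze the backbone
    p.requires_grad = False
optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, net.parameters()), lr=1e-3
)
# sanity check: only the fc parameters remain trainable
print([n for n, p in net.named_parameters() if p.requires_grad])  # ['fc.weight', 'fc.bias']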
Method Two
This method wraps the layers to be frozen in a torch.no_grad() context inside the network's forward method. Because no gradients are tracked inside that block, the wrapped layers are never updated, and their intermediate activations are not stored, which also saves memory. Note that torch.no_grad() cuts the backward pass through the block entirely, so it is suited to freezing the earliest layers of a network. This approach is strongly recommended.
import torch
import torch.nn as nn

class xxnet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = xx
        self.layer2 = xx
        self.fc = xx

    def forward(self, x):
        # layer1 and layer2 are frozen: no gradient is tracked through them
        with torch.no_grad():
            x = self.layer1(x)
            x = self.layer2(x)
        x = self.fc(x)
        return x
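A quick usage sketch of method two, assuming concrete nn.Linear layers in place of the xx placeholders (all layer sizes below are made up for illustration):

import torch
import torch.nn as nn

class XXNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(10, 10)  # frozen in forward
        self.layer2 = nn.Linear(10, 10)  # frozen in forward
        self.fc = nn.Linear(10, 2)       # trainable head

    def forward(self, x):
        with torch.no_grad():
            x = self.layer1(x)
            x = self.layer2(x)
        x = self.fc(x)
        return x

net = XXNet()
out = net(torch.randn(4, 10))
out.sum().backward()
# only the fc parameters receive gradients
print(net.fc.weight.grad is not None)  # True
print(net.layer1.weight.grad)          # None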