Tip: Some open-source codebases do not include code for computing a network's parameter count and computational cost. Below is a general method to obtain both quickly using thop.
1 Model parameters and computation
Number of parameters (#params): how many parameters the network model contains. It is independent of the input data and determined mainly by the model's structure; its main effect at run time is the amount of RAM or GPU memory required.
Computational cost (#FLOPs): FLOPs (floating-point operations) are the usual measure of computation and mainly reflect the complexity of the algorithm/model. Papers generally report GFLOPs, where 1 GFLOPs = 10^9 FLOPs.
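To make the #params definition concrete, here is a minimal sketch (the layer shapes are illustrative assumptions, not from the article) that counts a convolution layer's parameters directly in PyTorch:

```python
import torch.nn as nn

# A 3x3 convolution: 3 input channels -> 16 output channels, with bias.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, bias=True)

# Hand count: weights 16 * 3 * 3 * 3 = 432, plus 16 biases, total 448.
n_params = sum(p.numel() for p in conv.parameters())
print(n_params)  # 448
```

This shows why #params depends only on the structure: nothing about the input resolution enters the count, whereas FLOPs grow with the spatial size the kernel slides over.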
2 Install thop
pip install thop
3 Sample code
#### Example: compute the parameter count and FLOPs of the network below ####
import torch
import torch.nn as nn

# SENet squeeze-and-excitation block
class SELayer(nn.Module):
    def __init__(self, channel, reduction=16):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channel, channel // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channel // reduction, channel, bias=False),
            nn.Sigmoid()
        )

    def forward(self, x):
        b, c, _, _ = x.size()
        y = self.avg_pool(x).view(b, c)   # squeeze: global average pooling
        y = self.fc(y).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * y.expand_as(x)         # reweight the input channels
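Before profiling, it helps to hand-count what thop should report for #params: the SELayer above holds weights only in its two bias-free Linear layers (the pooling, ReLU, and Sigmoid contribute no parameters). A quick arithmetic check:

```python
# SELayer(channel=64, reduction=16): only the two Linear layers hold weights.
channel, reduction = 64, 16
hidden = channel // reduction   # 64 // 16 = 4
fc1 = channel * hidden          # 64 * 4 = 256 weights (bias=False)
fc2 = hidden * channel          # 4 * 64 = 256 weights (bias=False)
print(fc1 + fc2)  # 512
```

A sanity check like this is a good habit: if the profiler's parameter count disagrees with the hand count, the model was probably built with different arguments than you intended.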
#### thop usage (adapt as needed) ####
import thop

if __name__ == '__main__':
    # 64 input channels
    model = SELayer(channel=64)
    # input image tensor of shape (1, 64, 640, 640)
    x = torch.randn(1, 64, 640, 640)
    flops, params = thop.profile(model, inputs=(x,))
    print("%s | %s | %s" % ("Model", "Params(M)", "FLOPs(G)"))
    print("------|-----------|------")
    print("%s | %.7f | %.7f" % ("SELayer", params / (1000 ** 2), flops / (1000 ** 3)))