Representing average pooling with a convolution in PyTorch

Average pooling slides a window of size kernel_size over the feature map; the values covered by the window are summed and averaged, and that average becomes the output value for the covered region. If we express average pooling as a convolution, the convolution weights are (assuming two-dimensional data and kernel_size=3):
    W = 1/9 * | 1 1 1 |
              | 1 1 1 |
              | 1 1 1 |
Note: in this case the convolution kernel's weights are no longer trainable parameters.

Code:

import torch
import torch.nn as nn

# Define the convolution kernel's weight parameters.
# nn.Conv2d weights have shape (out_channels, in_channels // groups, kH, kW).
# We use a grouped convolution (groups = number of channels) so that each
# channel is averaged independently, exactly as pooling does; the per-group
# input-channel dimension is therefore 1.
def define_Conv_to_Avg2d(in_channel, out_channel, kernel_size):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    elif not (isinstance(kernel_size, tuple) and len(kernel_size) == 2):
        raise ValueError('kernel_size size error!')
    weight = torch.ones((out_channel, 1, kernel_size[0], kernel_size[1]))
    return weight / (kernel_size[0] * kernel_size[1])

# Pooling does not change the number of channels, so in_channel == out_channel;
# groups=in_channel keeps the channels from mixing during the convolution.
Pi = nn.Conv2d(in_channel, out_channel, kernel_size=3, stride=2,
               padding=1, bias=False, groups=in_channel)
Pi.weight = torch.nn.Parameter(
    define_Conv_to_Avg2d(in_channel, out_channel, kernel_size=3),
    requires_grad=False)
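We can sanity-check the equivalence numerically: build a convolution whose groups count equals the channel count (so channels do not mix, just like pooling) with every weight set to 1/9, and compare its output against nn.AvgPool2d. The input shape and random seed below are arbitrary choices for the demonstration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
channels = 4
x = torch.randn(1, channels, 8, 8)

# Average pooling via grouped convolution: each channel gets its own
# 3x3 kernel whose entries are all 1/9.
weight = torch.full((channels, 1, 3, 3), 1.0 / 9)
conv = nn.Conv2d(channels, channels, kernel_size=3, stride=2, padding=1,
                 bias=False, groups=channels)
conv.weight = nn.Parameter(weight, requires_grad=False)

# Reference: built-in average pooling with the same geometry.
# (count_include_pad=True by default, which matches the conv's zero padding.)
pool = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)

print(torch.allclose(conv(x), pool(x), atol=1e-6))  # True
```

Note that nn.AvgPool2d's default count_include_pad=True divides by the full kernel area even at the borders, which is exactly what the convolution over zero-padded input computes; with count_include_pad=False the two would differ at the edges.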

Origin blog.csdn.net/qq_44846512/article/details/113655687