【PyTorch】Tutorial: torch.nn.ConvTranspose2d

torch.nn.ConvTranspose2d

CLASS torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)

2D transposed convolution operator.

This module can be seen as the gradient of Conv2d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation, since it does not compute a true inverse of convolution). For more details, see the paper Deconvolutional Networks.

Parameters

  • in_channels ([int]) – number of channels in the input image
  • out_channels ([int]) – number of channels produced by the convolution
  • kernel_size ([int] or [tuple]) – size of the convolving kernel
  • stride ([int] or [tuple], optional) – stride of the convolution. Default: 1
  • padding ([int] or [tuple], optional) – controls the implicit zero-padding: dilation * (kernel_size - 1) - padding zeros are added to both sides of each dimension of the input. Default: 0
  • output_padding ([int] or [tuple], optional) – additional size added to one side of each dimension in the output shape (see the sketch after this list). Default: 0
  • groups ([int], optional) – number of groups. Default: 1
  • bias ([bool], optional) – if True, adds a learnable bias to the output. Default: True
  • dilation ([int] or [tuple], optional) – spacing between kernel elements. Default: 1
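
Why output_padding is needed: when stride > 1, several input sizes collapse to the same Conv2d output size, so the transposed convolution alone cannot tell which size to restore. A minimal sketch (the channel counts and sizes here are illustrative, not from the original post):

import torch
import torch.nn as nn

# with stride=2, Conv2d maps both 7x7 and 8x8 inputs to 4x4
conv = nn.Conv2d(8, 8, 3, stride=2, padding=1)
print(conv(torch.randn(1, 8, 7, 7)).size())  # torch.Size([1, 8, 4, 4])
print(conv(torch.randn(1, 8, 8, 8)).size())  # torch.Size([1, 8, 4, 4])

# output_padding adds the missing row/column on one side of the output
up7 = nn.ConvTranspose2d(8, 8, 3, stride=2, padding=1)                    # restores 7x7
up8 = nn.ConvTranspose2d(8, 8, 3, stride=2, padding=1, output_padding=1)  # restores 8x8
print(up7(torch.randn(1, 8, 4, 4)).size())  # torch.Size([1, 8, 7, 7])
print(up8(torch.randn(1, 8, 4, 4)).size())  # torch.Size([1, 8, 8, 8])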

Shape

  • Input: $(N, C_{in}, H_{in}, W_{in})$ or $(C_{in}, H_{in}, W_{in})$
  • Output: $(N, C_{out}, H_{out}, W_{out})$ or $(C_{out}, H_{out}, W_{out})$

where

$$H_{out} = (H_{in} - 1) \times \text{stride}[0] - 2 \times \text{padding}[0] + \text{dilation}[0] \times (\text{kernel\_size}[0] - 1) + \text{output\_padding}[0] + 1$$

$$W_{out} = (W_{in} - 1) \times \text{stride}[1] - 2 \times \text{padding}[1] + \text{dilation}[1] \times (\text{kernel\_size}[1] - 1) + \text{output\_padding}[1] + 1$$
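
As a sanity check, a tiny helper (hypothetical, not part of torch) that evaluates the formula above reproduces the sizes printed by the example code below:

def deconv_out_size(size, stride, padding, dilation, kernel_size, output_padding=0):
    # the H_out / W_out formula from above, one spatial dimension at a time
    return (size - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1

# kernel (3, 5), stride (2, 1), padding (4, 2), input 50x100 (the example below)
print(deconv_out_size(50, 2, 4, 1, 3))   # 93  -> H_out
print(deconv_out_size(100, 1, 2, 1, 5))  # 100 -> W_out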

Variables

  • weight ([Tensor]) – the learnable weights of the module, of shape $(\text{in\_channels}, \frac{\text{out\_channels}}{\text{groups}}, \text{kernel\_size}[0], \text{kernel\_size}[1])$. The values are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{\text{groups}}{C_{out} \times \prod_{i=0}^{1} \text{kernel\_size}[i]}$
  • bias ([Tensor]) – the learnable bias of the module, of shape $(\text{out\_channels})$. The values are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k = \frac{\text{groups}}{C_{out} \times \prod_{i=0}^{1} \text{kernel\_size}[i]}$
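
To confirm this parameter layout (note that, unlike Conv2d, the in_channels axis comes first in the weight tensor), a quick inspection:

import torch.nn as nn

m = nn.ConvTranspose2d(16, 33, (3, 5))
print(m.weight.size())  # torch.Size([16, 33, 3, 5]) -> (in_channels, out_channels/groups, kH, kW)
print(m.bias.size())    # torch.Size([33])           -> (out_channels,)
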
# torch.nn.ConvTranspose2d
import torch
import torch.nn as nn

# With square kernels and equal stride
m = nn.ConvTranspose2d(16, 33, 3, stride=2)
# non-square kernels and unequal stride and with padding
m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
input = torch.randn(20, 16, 50, 100)
output = m(input)
print("output.size: ", output.size()) # output.size:  torch.Size([20, 33, 93, 100])

# exact output size can be also specified as an argument
input = torch.randn(1, 16, 12, 12)
downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
h = downsample(input)
print("h.size(): ", h.size()) # h.size():  torch.Size([1, 16, 6, 6])

output = upsample(h, output_size=input.size())
print("output.size(): ", output.size()) # output.size():  torch.Size([1, 16, 12, 12]) 

【References】

ConvTranspose2d — PyTorch 1.13 documentation

Reposted from blog.csdn.net/zhoujinwang/article/details/129289908