Summary of PyTorch's nn module (03)

A summary of commonly used methods in PyTorch's nn module.

(1).add_module(name,module) # adds a named child module to the network
self.add_module("conv1",nn.Conv2d(10,20,4))
(2).children() # returns an iterator over the model's direct child modules
.named_children() # same, but yields (name, module) pairs

.modules() # returns an iterator over all submodules of the network, recursively (including the model itself)
.named_modules() # same, but yields (name, module) pairs
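A minimal sketch contrasting the two iterators (the two-layer model here is purely illustrative):

import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3,16,3), nn.ReLU())
for name, child in net.named_children():
    print(name, child)   # direct children only: "0" and "1"
for name, m in net.named_modules():
    print(name, m)       # the model itself plus every submodule, recursively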
(3).eval() # puts the model in evaluation mode; only affects layers such as Dropout and BatchNorm
.train() # puts the model back in training mode
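A quick sketch of the mode switch (the layers chosen are just examples of mode-sensitive modules):

import torch.nn as nn

model = nn.Sequential(nn.Linear(10,10), nn.Dropout(p=0.5))
model.eval()    # Dropout stops zeroing activations; BatchNorm would use running stats
model.train()   # restores training-time behavior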
(4).zero_grad() # zeroes the gradients of all parameters; call it before each backward pass / parameter update
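A minimal sketch of the usual pattern (in practice optimizer.zero_grad() is the more common form; the model here is hypothetical):

import torch
import torch.nn as nn

model = nn.Linear(10,2)
loss = model(torch.randn(4,10)).sum()
model.zero_grad()   # clear gradients left over from the previous step
loss.backward()     # fresh gradients accumulate into each parameter's .grad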
(5)torch.nn.Sequential(*args) # an ordered container; modules (or an OrderedDict of modules) are added in the order given and executed sequentially, used to build networks
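Both construction forms, sketched with arbitrary layer sizes:

from collections import OrderedDict
import torch.nn as nn

net1 = nn.Sequential(nn.Conv2d(1,20,5), nn.ReLU())   # positional form
net2 = nn.Sequential(OrderedDict([                   # OrderedDict form gives named layers
    ("conv", nn.Conv2d(1,20,5)),
    ("relu", nn.ReLU()),
]))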
(6)nn.ModuleList(modules=None) # holds submodules that can be indexed like a Python list
(7).append(module) # appends a single module to a ModuleList
(8).extend(modules) # appends an iterable of modules to a ModuleList
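A sketch combining (6), (7), and (8); the layer sizes are illustrative:

import torch.nn as nn

layers = nn.ModuleList([nn.Linear(10,10) for _ in range(3)])
layers.append(nn.Linear(10,5))               # (7) add one module
layers.extend([nn.ReLU(), nn.Linear(5,2)])   # (8) add an iterable of modules
print(layers[0])                             # indexable like a list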

---------Convolution, pooling, and activation functions---------
Note the tensor layout in torch: N C H W, i.e. batch size, channels, image height, image width.
(1)nn.Conv1d(in_channels,out_channels,kernel_size,stride=1,padding=0,dilation=1,groups=1,bias=True)
nn.Conv2d(in_channels,out_channels,kernel_size,stride=1,padding=0,dilation=1,groups=1,bias=True)
nn.Conv3d(in_channels,out_channels,kernel_size,stride=1,padding=0,dilation=1,groups=1,bias=True)
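A minimal sketch showing the N C H W layout in action (sizes are arbitrary):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
x = torch.randn(8,3,32,32)   # N C H W
print(conv(x).shape)         # torch.Size([8, 16, 32, 32]); padding=1 preserves H and W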
(2) nn.ConvTranspose1d(in_channels,out_channels,kernel_size,stride=1,padding=0,output_padding=0,groups=1,bias=True)
nn.ConvTranspose2d(in_channels,out_channels,kernel_size,stride=1,padding=0,output_padding=0,groups=1,bias=True)
nn.ConvTranspose3d(in_channels,out_channels,kernel_size,stride=1,padding=0,output_padding=0,groups=1,bias=True)
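Transposed convolution is commonly used for upsampling; a sketch with illustrative sizes:

import torch
import torch.nn as nn

up = nn.ConvTranspose2d(16,3,kernel_size=2,stride=2)
x = torch.randn(8,16,16,16)
print(up(x).shape)   # torch.Size([8, 3, 32, 32]): H and W are doubled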

(3) nn.MaxPool1d(kernel_size,stride=None,padding=0,dilation=1,return_indices=False,ceil_mode=False)
nn.MaxPool2d(kernel_size,stride=None,padding=0,dilation=1,return_indices=False,ceil_mode=False)
nn.MaxPool3d(kernel_size,stride=None,padding=0,dilation=1,return_indices=False,ceil_mode=False)
(4) nn.MaxUnpool1d(kernel_size,stride=None,padding=0)
nn.MaxUnpool2d(kernel_size,stride=None,padding=0)
nn.MaxUnpool3d(kernel_size,stride=None,padding=0)
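A sketch combining (3) and (4): pooling with return_indices=True, then unpooling with the saved indices (the input size is arbitrary):

import torch
import torch.nn as nn

pool = nn.MaxPool2d(2, stride=2, return_indices=True)   # indices are required for unpooling
unpool = nn.MaxUnpool2d(2, stride=2)
x = torch.randn(1,1,4,4)
y, idx = pool(x)
print(unpool(y, idx).shape)   # torch.Size([1, 1, 4, 4]); non-maximum positions become zero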
(5) nn.AvgPool1d(kernel_size,stride=None,padding=0,ceil_mode=False,count_include_pad=True)
nn.AvgPool2d(kernel_size,stride=None,padding=0,ceil_mode=False,count_include_pad=True)
nn.AvgPool3d(kernel_size,stride=None,padding=0,ceil_mode=False,count_include_pad=True)
(6) nn.LPPool2d(norm_type,kernel_size,stride=None,ceil_mode=False) # power-average (Lp-norm) pooling
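A sketch contrasting (5) and (6) on the same input (sizes are illustrative):

import torch
import torch.nn as nn

x = torch.randn(1,1,4,4)
avg = nn.AvgPool2d(kernel_size=2)               # arithmetic mean of each 2x2 window
lp = nn.LPPool2d(norm_type=2, kernel_size=2)    # (sum of x**2) ** (1/2) per window
print(avg(x).shape, lp(x).shape)                # both torch.Size([1, 1, 2, 2])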

(7) Adaptive pooling: produces an output of the specified size for an input of any size.
nn.AdaptiveMaxPool1d(output_size,return_indices=False)
nn.AdaptiveMaxPool2d(output_size,return_indices=False)

nn.AdaptiveAvgPool1d(output_size)
nn.AdaptiveAvgPool2d(output_size)
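A sketch of the size-independence property; output_size=(1,1) here is the common "global average pooling" choice:

import torch
import torch.nn as nn

gap = nn.AdaptiveAvgPool2d(output_size=(1,1))
for h, w in [(32,32), (17,23)]:
    x = torch.randn(2,64,h,w)
    print(gap(x).shape)   # torch.Size([2, 64, 1, 1]) regardless of input H and W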

(8) nn.ReLU(inplace=False)
nn.ReLU6(inplace=False)
nn.ELU(alpha=1.0,inplace=False)
nn.PReLU(num_parameters=1,init=0.25)
nn.LeakyReLU(negative_slope=0.01,inplace=False)
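A short sketch of the ReLU family and the inplace flag (input values are arbitrary):

import torch
import torch.nn as nn

x = torch.randn(5)
print(nn.ReLU()(x))                           # max(0, x)
print(nn.LeakyReLU(negative_slope=0.01)(x))   # small slope instead of 0 for x < 0
y = x.clone()
nn.ReLU(inplace=True)(y)                      # inplace=True overwrites y, saving memory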
(9) nn.Threshold(threshold,value,inplace=False)
(10)nn.Sigmoid()
nn.LogSigmoid()
(11)nn.Tanh()
(12)nn.Hardtanh(min_val=-1,max_val=1,inplace=False)
(13)nn.Softplus(beta=1,threshold=20)
(14)nn.Softshrink(lambd=0.5)
(15)nn.Softsign()
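A sketch of the saturating activations from (10)-(13) on a small input:

import torch
import torch.nn as nn

x = torch.linspace(-3,3,5)
print(nn.Sigmoid()(x))    # 1 / (1 + exp(-x)), output in (0, 1)
print(nn.Tanh()(x))       # output in (-1, 1)
print(nn.Softplus()(x))   # log(1 + exp(x)), a smooth approximation of ReLU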


(16)nn.BatchNorm1d(num_features,eps=1e-5,momentum=0.1,affine=True)
nn.BatchNorm2d(num_features,eps=1e-5,momentum=0.1,affine=True)
nn.BatchNorm3d(num_features,eps=1e-5,momentum=0.1,affine=True)
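A minimal sketch; note that num_features must match the channel dimension C of the N C H W input:

import torch
import torch.nn as nn

bn = nn.BatchNorm2d(num_features=16)
x = torch.randn(8,16,32,32)
y = bn(x)                      # normalizes each channel over N, H, W
print(bn.running_mean.shape)   # torch.Size([16]): one running mean per channel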

(17)nn.Linear(in_features,out_features,bias=True)
(18)nn.Dropout(p=0.5,inplace=False)
nn.Dropout2d(p=0.5,inplace=False)
nn.Dropout3d(p=0.5,inplace=False)
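A sketch combining (17) and (18) with illustrative sizes:

import torch
import torch.nn as nn

fc = nn.Linear(in_features=128, out_features=10)
drop = nn.Dropout(p=0.5)
x = torch.randn(4,128)
print(drop(fc(x)).shape)   # torch.Size([4, 10]); in train() mode each element is zeroed with probability p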


(19)nn.L1Loss(size_average=True)
(20)nn.MSELoss(size_average=True)
(21)nn.CrossEntropyLoss(weight=None,size_average=True)
(22)nn.NLLLoss(weight=None,size_average=True)
nn.NLLLoss2d(weight=None,size_average=True)
(23)nn.KLDivLoss(weight=None,size_average=True)
(24)nn.BCELoss(weight=None,size_average=True)
Note: in recent PyTorch versions size_average is deprecated in favor of reduction='none'/'mean'/'sum', and NLLLoss2d has been folded into NLLLoss.
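A sketch of the relationship between (21) and (22); the sample data is made up:

import torch
import torch.nn as nn

logits = torch.randn(4,3)          # raw scores: 4 samples, 3 classes
target = torch.tensor([0,2,1,0])
ce = nn.CrossEntropyLoss()         # LogSoftmax + NLLLoss in one module
nll = nn.NLLLoss()
print(ce(logits, target))
print(nll(torch.log_softmax(logits, dim=1), target))   # matches the CrossEntropyLoss value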

Reposted from blog.csdn.net/weixin_44493916/article/details/90024037