Implementations of Several Optimization Methods in PyTorch (with Code)

This post only covers how to use these optimizers in code; for the underlying theory, consult a relevant textbook. Below are several common optimization methods:

1、SGD

torch.optim.SGD(params, lr=<required parameter>, momentum=0, dampening=0, weight_decay=0, nesterov=False)

Parameters:

1. params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
2. lr (float) – learning rate
3. momentum (float, optional) – momentum factor (default: 0)
4. dampening (float, optional) – dampening for momentum (default: 0)
5. weight_decay (float, optional) – weight decay (L2 penalty) (default: 0). This is L2 regularization; choosing a suitable weight-decay coefficient λ matters and has to be tuned for the specific task, with 1e-4 or 1e-3 being reasonable first tries (see the sketch after this list)
6. nesterov (bool, optional) – enables Nesterov momentum (default: False)
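
As a minimal usage sketch (the model, data, and loss below are hypothetical placeholders, not part of the original post), SGD with momentum and weight decay is typically wired into a training step like this:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)                          # hypothetical model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)

x, y = torch.randn(32, 10), torch.randn(32, 1)    # hypothetical batch
optimizer.zero_grad()                             # clear gradients from the previous step
loss = nn.MSELoss()(model(x), y)
loss.backward()                                   # compute gradients
optimizer.step()                                  # update parameters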

2、AdaGrad

torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0)

Parameters:
1. params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
2. lr (float, optional) – learning rate (default: 1e-2)
3. lr_decay (float, optional) – learning rate decay (default: 0)
4. weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
5. initial_accumulator_value (float, optional) – initial value of the squared-gradient accumulator (default: 0)
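
Switching to Adagrad only changes the constructor; a short sketch, again assuming a hypothetical model:

import torch
import torch.nn as nn

model = nn.Linear(10, 1)    # hypothetical model
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01,
                                lr_decay=1e-4, weight_decay=1e-4)
# The training step (zero_grad / backward / step) is identical to the SGD example above.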

3、RMSProp

torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False)

Parameters:
1. params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
2. lr (float, optional) – learning rate (default: 1e-2)
3. momentum (float, optional) – momentum factor (default: 0)
4. alpha (float, optional) – smoothing constant (default: 0.99)
5. eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
6. centered (bool, optional) – if True, compute the centered RMSProp: the gradient is normalized by an estimation of its variance (default: False)
7. weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
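
A sketch of constructing RMSprop with momentum and the centered variant enabled (the model is again a hypothetical placeholder):

import torch
import torch.nn as nn

model = nn.Linear(10, 1)    # hypothetical model
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01,
                                alpha=0.99, momentum=0.9, centered=True)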

4、Adam

torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)

Parameters:
1. params (iterable) – iterable of parameters to optimize or dicts defining parameter groups
2. lr (float, optional) – learning rate (default: 1e-3)
3. betas (Tuple[float, float], optional) – coefficients used for computing running averages of the gradient and its square (default: (0.9, 0.999))
4. eps (float, optional) – term added to the denominator to improve numerical stability (default: 1e-8)
5. weight_decay (float, optional) – weight decay (L2 penalty) (default: 0)
6. amsgrad (bool, optional) – whether to use the AMSGrad variant of this algorithm from the paper "On the Convergence of Adam and Beyond" (default: False)
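
A sketch of Adam that also shows the per-parameter-group form of params mentioned above (the model and layer indices are hypothetical):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))  # hypothetical model

# Single group: one lr / weight_decay for all parameters
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), weight_decay=1e-4, amsgrad=False)

# Parameter groups: different hyperparameters per group of parameters
optimizer = torch.optim.Adam([
    {"params": model[0].parameters(), "lr": 1e-3},
    {"params": model[2].parameters(), "lr": 1e-4, "weight_decay": 1e-4},
])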

Putting the above together, a typical comparison setup creates one network instance per optimizer (net_SGD, net_Momentum, etc.), all sharing the same Learning_rate:

import torch

opt_SGD      = torch.optim.SGD(net_SGD.parameters(), lr=Learning_rate)
opt_Momentum = torch.optim.SGD(net_Momentum.parameters(), lr=Learning_rate, momentum=0.8, nesterov=True)
opt_RMSprop  = torch.optim.RMSprop(net_RMSprop.parameters(), lr=Learning_rate, alpha=0.9)
opt_Adam     = torch.optim.Adam(net_Adam.parameters(), lr=Learning_rate, betas=(0.9, 0.99))
opt_Adagrad  = torch.optim.Adagrad(net_Adagrad.parameters(), lr=Learning_rate)
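
To actually compare the optimizers, each (net, optimizer) pair is trained on the same data and the resulting loss curves are inspected; a compressed sketch, assuming a hypothetical DataLoader named train_loader and loss function loss_func:

for net, opt in [(net_SGD, opt_SGD), (net_Momentum, opt_Momentum),
                 (net_RMSprop, opt_RMSprop), (net_Adam, opt_Adam),
                 (net_Adagrad, opt_Adagrad)]:
    for x, y in train_loader:          # hypothetical DataLoader
        opt.zero_grad()
        loss = loss_func(net(x), y)    # hypothetical loss function, e.g. nn.MSELoss()
        loss.backward()
        opt.step()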