lmflow.optim.adamp#
Classes#
AdamP: Implements the AdamP algorithm.
Module Contents#
- class lmflow.optim.adamp.AdamP(params, lr: float = 0.001, betas=(0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0, delta: float = 0.1, wd_ratio: float = 0.1, nesterov: bool = False)[source]#
Bases: torch.optim.optimizer.Optimizer
Implements the AdamP algorithm.
It was proposed in "Slowing Down the Weight Norm Increase in Momentum-based Optimizers" (https://arxiv.org/abs/2006.08217).
- Note: Reference code: clovaai/AdamP
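The following is a minimal usage sketch, not part of the official docs: it constructs AdamP with the constructor arguments listed in the signature above and runs a single optimization step on a toy model. It assumes lmflow is installed and exposes the class at lmflow.optim.adamp.AdamP as documented; the comments on delta and wd_ratio reflect their roles in the referenced paper and reference code.

```python
import torch
from lmflow.optim.adamp import AdamP  # assumes lmflow is installed

# Toy model and data for illustration only.
model = torch.nn.Linear(10, 1)
inputs = torch.randn(4, 10)
targets = torch.randn(4, 1)

# Construct the optimizer with the documented default hyperparameters.
optimizer = AdamP(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
    delta=0.1,     # threshold used to detect scale-invariant parameters (per the paper)
    wd_ratio=0.1,  # factor applied to weight decay for those parameters (per the paper)
    nesterov=False,
)

# One standard PyTorch training step.
optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
```

Because AdamP subclasses torch.optim.optimizer.Optimizer, it can be dropped into an existing training loop wherever torch.optim.Adam would otherwise be used.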