lmflow.optim.sgdp#
Classes#
- SGDP: Implements SGDP algorithm.
Module Contents#
- class lmflow.optim.sgdp.SGDP(params, lr: float = 0.001, momentum: float = 0, dampening: float = 0, eps: float = 1e-08, weight_decay: float = 0, delta: float = 0.1, wd_ratio: float = 0.1, nesterov: bool = False)[source]#
Bases: torch.optim.optimizer.Optimizer
Implements SGDP algorithm.
It was proposed in *Slowing Down the Weight Norm Increase in Momentum-based Optimizers* (https://arxiv.org/abs/2006.08217).
- Note:
Reference code: clovaai/AdamP
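The core idea of SGDP is to slow the weight-norm growth caused by momentum: when a parameter behaves as (approximately) scale-invariant, the radial component of the update is projected out and weight decay is rescaled by `wd_ratio`. The following NumPy sketch illustrates that projection step under our own simplifications; the function names (`projection`, `sgdp_step`) and the single-tensor, no-Nesterov formulation are illustrative assumptions, not the library's actual implementation.

```python
import numpy as np

def projection(p, grad, delta=0.1, wd_ratio=0.1, eps=1e-8):
    """Illustrative sketch of the SGDP projection (not lmflow's code).

    Returns the (possibly projected) update direction and the factor by
    which weight decay is scaled for this parameter.
    """
    # Cosine similarity between the parameter and its update direction.
    cosine = abs(np.dot(p, grad)) / (np.linalg.norm(p) * np.linalg.norm(grad) + eps)
    # The paper treats a parameter as scale-invariant when this cosine
    # similarity is small (threshold delta / sqrt(dim)).
    if cosine < delta / np.sqrt(p.size):
        p_unit = p / (np.linalg.norm(p) + eps)
        # Remove the radial (norm-growing) component of the update.
        grad = grad - np.dot(p_unit, grad) * p_unit
        return grad, wd_ratio
    return grad, 1.0

def sgdp_step(p, grad, velocity, lr=0.1, momentum=0.9, weight_decay=1e-4):
    # Plain SGD-with-momentum update, with the projection applied to the
    # momentum buffer before the parameter update (simplified sketch).
    velocity = momentum * velocity + grad
    d_p, wd_scale = projection(p, velocity)
    p = p * (1.0 - lr * weight_decay * wd_scale) - lr * d_p
    return p, velocity

# Example: a gradient orthogonal to the weight triggers the projection,
# so the resulting update stays in the tangent space of the weight norm.
w = np.array([1.0, 0.0])
g = np.array([0.0, 1.0])
w_new, v_new = sgdp_step(w, g, np.zeros(2))
```

In the real optimizer the projection is applied per parameter tensor inside `step()`, with `delta`, `wd_ratio`, `eps`, and `nesterov` corresponding to the constructor arguments shown in the signature above.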