lmflow.optim.adamp
==================

.. py:module:: lmflow.optim.adamp


Classes
-------

.. autoapisummary::

   lmflow.optim.adamp.AdamP


Module Contents
---------------

.. py:class:: AdamP(params, lr: float = 0.001, betas=(0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0, delta: float = 0.1, wd_ratio: float = 0.1, nesterov: bool = False)

   Bases: :py:obj:`torch.optim.optimizer.Optimizer`

   Implements the AdamP algorithm.

   It has been proposed in `Slowing Down the Weight Norm Increase in
   Momentum-based Optimizers <https://arxiv.org/abs/2006.08217>`_.

   Note:
       Reference code: https://github.com/clovaai/AdamP

   .. py:method:: _channel_view(x)
      :staticmethod:


   .. py:method:: _layer_view(x)
      :staticmethod:


   .. py:method:: _cosine_similarity(x, y, eps, view_func)
      :staticmethod:


   .. py:method:: _projection(p, grad, perturb, delta, wd_ratio, eps)


   .. py:method:: step(closure=None)

      Performs a single optimization step.

      Arguments:
          closure: A closure that reevaluates the model and returns the loss.
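
A minimal usage sketch, assuming a standard PyTorch training loop; the model, data, and loss below are placeholders, while the ``AdamP`` constructor arguments follow the signature documented above.

.. code-block:: python

   import torch
   import torch.nn as nn

   from lmflow.optim.adamp import AdamP

   # Placeholder model and data, used only to illustrate the optimizer call.
   model = nn.Linear(16, 4)
   inputs = torch.randn(8, 16)
   targets = torch.randint(0, 4, (8,))

   # Arguments follow the documented signature; delta and wd_ratio control
   # the projection-based handling of scale-invariant weights.
   optimizer = AdamP(
       model.parameters(),
       lr=1e-3,
       betas=(0.9, 0.999),
       weight_decay=1e-2,
       delta=0.1,
       wd_ratio=0.1,
       nesterov=False,
   )

   criterion = nn.CrossEntropyLoss()

   optimizer.zero_grad()
   loss = criterion(model(inputs), targets)
   loss.backward()
   optimizer.step()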