lmflow.optim.radam#

Classes#

RAdam

Implements the RAdam optimization algorithm.

Module Contents#

class lmflow.optim.radam.RAdam(params, lr: float = 0.001, betas=(0.9, 0.999), eps: float = 1e-08, weight_decay: float = 0)[source]#

Bases: torch.optim.optimizer.Optimizer

Implements the RAdam optimization algorithm.

Note:

Deprecated; please use the version provided by PyTorch (torch.optim.RAdam).

It was proposed in "On the Variance of the Adaptive Learning Rate and Beyond" (https://arxiv.org/abs/1908.03265).

Note:

Reference code: LiyuanLucasLiu/RAdam (https://github.com/LiyuanLucasLiu/RAdam)
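
A minimal usage sketch, assuming lmflow is installed; the model, data, and hyperparameters below are illustrative, and the constructor arguments follow the signature documented above:

```python
import torch
from lmflow.optim.radam import RAdam

# A tiny model and synthetic batch, purely for illustration.
model = torch.nn.Linear(10, 1)
inputs = torch.randn(32, 10)
targets = torch.randn(32, 1)

# Constructor arguments match the documented signature and defaults.
optimizer = RAdam(model.parameters(), lr=1e-3, betas=(0.9, 0.999),
                  eps=1e-8, weight_decay=0)

loss_fn = torch.nn.MSELoss()
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

Since the class is marked deprecated, torch.optim.RAdam accepts the same arguments and can be swapped in as a drop-in replacement.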

__setstate__(state)[source]#
step(closure=None)[source]#

Performs a single optimization step.

Arguments:

closure (callable, optional): A closure that reevaluates the model and returns the loss.
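
A short sketch of the closure form of step, reusing the model, optimizer, and loss function from the example above; as with other PyTorch optimizers, the closure should clear gradients, recompute the loss, call backward, and return the loss:

```python
def closure():
    # Reevaluate the model and return the loss, as step() expects.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    return loss

# step() invokes the closure and performs one RAdam update.
loss = optimizer.step(closure)
```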