lmflow.optim.adagrad

Classes

    AdaGrad: Optimizer implementing the AdaGrad algorithm.

Module Contents

class lmflow.optim.adagrad.AdaGrad(params, lr=0.001, eps=1e-08, weight_decay=0)

    Implements AdaGrad (Duchi et al., 2011), which adapts each parameter's learning rate by dividing by the square root of the accumulated sum of its squared gradients. The constructor arguments follow the standard torch.optim conventions:

        params (iterable): iterable of parameters to optimize or dicts defining parameter groups.
        lr (float, optional): learning rate (default: 0.001).
        eps (float, optional): term added to the denominator for numerical stability (default: 1e-08).
        weight_decay (float, optional): weight decay (L2 penalty) coefficient (default: 0).

Bases: torch.optim.Optimizer
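
For reference, the textbook AdaGrad update implied by this class's name and parameters can be sketched as follows, with weight decay folded into the gradient as an L2 term. Here \(\eta\), \(\lambda\), and \(\epsilon\) correspond to lr, weight_decay, and eps; the exact placement of \(\epsilon\) relative to the square root is an implementation detail not documented here:

\[
\begin{aligned}
g_t &= \nabla_\theta f_t(\theta_{t-1}) + \lambda\,\theta_{t-1} \\
G_t &= G_{t-1} + g_t \odot g_t \\
\theta_t &= \theta_{t-1} - \frac{\eta}{\sqrt{G_t} + \epsilon}\, g_t
\end{aligned}
\]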

step(closure=None)

    Performs a single optimization step.

    closure (callable, optional): a closure that reevaluates the model and returns the loss.
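
Since the class subclasses torch.optim.Optimizer, it can be dropped into a standard PyTorch training loop. A minimal usage sketch, assuming the documented constructor signature (the model, data, and loss below are placeholder examples, not part of lmflow):

    import torch
    import torch.nn as nn
    from lmflow.optim.adagrad import AdaGrad

    # placeholder model and synthetic data for illustration
    model = nn.Linear(10, 1)
    inputs = torch.randn(32, 10)
    targets = torch.randn(32, 1)

    optimizer = AdaGrad(model.parameters(), lr=0.001, eps=1e-08, weight_decay=0.01)

    for _ in range(100):
        optimizer.zero_grad()          # clear gradients from the previous step
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()                # compute gradients
        optimizer.step()               # apply the AdaGrad update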