pytorch
aaa1e076 - Smart Decay for Adam - Caffe2 (#61488)

Smart Decay for Adam - Caffe2 (#61488)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/61488

We want to decay learning parameters properly. Previously this was not done when a parameter was absent from a minibatch. We fix this by keeping track of missed minibatches and making the decay catch up accordingly.

The exponential moving averages (EMAs) for the first and second moments used in Adam are updated only for parameters seen in a minibatch. In principle, for the absent parameters, 0 should be added to the EMAs and the EMAs should then be decayed by multiplying by beta1 and beta2 respectively. To avoid the computational overhead of touching every parameter on every minibatch, we:

* keep track of the last minibatch in which each parameter was seen
* instead of decaying the EMAs by multiplying by beta1 and beta2, multiply them by beta1^k and beta2^k, where k is the number of minibatches since the parameter was last seen

Differential Revision: D27978269

fbshipit-source-id: e47524101ddfcb281c46c505b9b7a8f0835bc64a
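The catch-up rule can be sketched with a small NumPy routine: for each parameter row present in the current minibatch, the first- and second-moment EMAs are first multiplied by beta1^k and beta2^k (k being the number of minibatches since that row was last updated), which is equivalent to having added a zero gradient and decayed once per missed minibatch, and then a standard Adam step is applied. This is a minimal sketch of the idea only, not the Caffe2 operator; the function name, the array layout, and the use of the global step count for bias correction are assumptions.

```python
import numpy as np

def smart_decay_adam_step(param, grad, m, v, last_seen, step, ids,
                          lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Sparse Adam step with catch-up decay for rows missed in earlier minibatches.

    param, m, v: (num_rows, dim) arrays; grad: (len(ids), dim) gradients for `ids`;
    last_seen: (num_rows,) int array of the step at which each row was last updated.
    """
    # Number of minibatches since each updated row was last seen (>= 1).
    k = step - last_seen[ids]

    # Catch-up decay: multiplying by beta^k is equivalent to adding a zero
    # gradient and decaying the EMAs once for each of the k minibatches since
    # the row was last touched (including the current one).
    m[ids] *= beta1 ** k[:, None]
    v[ids] *= beta2 ** k[:, None]

    # Standard Adam EMA accumulation for the rows present in this minibatch.
    m[ids] += (1.0 - beta1) * grad
    v[ids] += (1.0 - beta2) * grad ** 2

    # Bias correction with the global step count (an assumption here; with the
    # zero gradients implicitly filled in, the EMAs behave like dense Adam).
    m_hat = m[ids] / (1.0 - beta1 ** step)
    v_hat = v[ids] / (1.0 - beta2 ** step)

    param[ids] -= lr * m_hat / (np.sqrt(v_hat) + eps)
    last_seen[ids] = step
```

Rows absent from a minibatch are never touched; their pending decay is applied lazily the next time they appear, which is how the change avoids a full pass over every parameter on every minibatch.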
Author: Jamie King