pytorch
2b94c6e0 - [optim] Adam defaults to fused/foreach when on CUDA + differentiable=False

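The commit title describes a behavior change in how `torch.optim.Adam` picks its implementation. Below is a minimal sketch, assuming the standard `torch.optim.Adam` keyword arguments `foreach`, `fused`, and `differentiable`, of how the default selection interacts with explicit implementation choices; it is an illustration of the described behavior, not code from the commit itself.

```python
import torch

# Parameters on CUDA, differentiable=False (the default): with this
# change, Adam may pick the faster foreach/fused path automatically
# instead of the plain single-tensor loop.
model = torch.nn.Linear(10, 10).cuda()
opt_default = torch.optim.Adam(model.parameters(), lr=1e-3)

# An implementation can still be forced explicitly.
opt_foreach = torch.optim.Adam(model.parameters(), lr=1e-3, foreach=True)
opt_fused = torch.optim.Adam(model.parameters(), lr=1e-3, fused=True)

# differentiable=True opts out of the fast defaults so the optimizer
# step itself can be differentiated through.
opt_diff = torch.optim.Adam(model.parameters(), lr=1e-3, differentiable=True)
```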