pytorch
18e4a466 - fix amp in inference in benchmarking suite (#103220)

Even if you passed in `--amp`, we would run inference in float32. `AlbertForMaskedLM` goes from a 1.305x speedup in float32 to 1.724x with amp, and to 1.910x with amp plus freezing. Benchmark numbers for amp are about to go way up lol.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/103220
Approved by: https://github.com/desertfire
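A minimal sketch of what the described bug amounts to (this is not the benchmark suite's actual code, and `run_inference` is a hypothetical helper): if the amp flag is ignored, the model call never runs under a `torch.autocast` context, so inference stays in float32 regardless of what the user passed. The fix is to gate the inference call on the flag:

```python
import contextlib

import torch


def run_inference(model, inputs, use_amp=False, device_type="cpu"):
    # Hypothetical helper: wrap the forward pass in autocast only when amp
    # is requested; otherwise use a no-op context and stay in float32.
    # bfloat16 is the usual autocast dtype on CPU, float16 on CUDA.
    amp_dtype = torch.bfloat16 if device_type == "cpu" else torch.float16
    ctx = (
        torch.autocast(device_type=device_type, dtype=amp_dtype)
        if use_amp
        else contextlib.nullcontext()
    )
    with torch.no_grad(), ctx:
        return model(inputs)


model = torch.nn.Linear(8, 4)
x = torch.randn(2, 8)
out_fp32 = run_inference(model, x, use_amp=False)
out_amp = run_inference(model, x, use_amp=True)
```

With the buggy behavior, both calls would produce float32 outputs; with the fix, the amp path runs matmul-heavy ops in the lower-precision dtype, which is where the reported speedups come from.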