benchmark
0e1a1885 - Fix eval batch size on Nvidia A100 40GB (#1078)

Commit · 3 years ago
Fix eval batch size on Nvidia A100 40GB (#1078)

Summary: Use `torch.cuda.device_name()` to specify the eval batch size on GPU devices.

Pull Request resolved: https://github.com/pytorch/benchmark/pull/1078
Reviewed By: FindHao
Differential Revision: D38362222
Pulled By: xuzhao9
fbshipit-source-id: 224fb18d8dcfc68374b3df333ca56acf86647663
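A minimal sketch of the idea behind this fix: select an eval batch size from the GPU device name. Note that PyTorch's actual API for querying the name is `torch.cuda.get_device_name()`, which returns a string such as "NVIDIA A100-SXM4-40GB". The helper name and the batch-size values below are hypothetical for illustration; the real per-device sizes live in the benchmark's configuration, not here.

```python
def eval_batch_size_for_device(device_name: str, default: int = 16) -> int:
    """Pick an eval batch size based on the GPU device name.

    Hypothetical mapping for illustration only: the A100 40GB variant
    gets a smaller batch than the default, since it has half the
    memory of the A100 80GB.
    """
    if "A100" in device_name and "40GB" in device_name:
        return 8  # illustrative value, not taken from the PR
    return default


# In a CUDA environment this would be driven by the live device name,
# e.g. (assumption: CUDA is available):
#   import torch
#   batch_size = eval_batch_size_for_device(torch.cuda.get_device_name())
print(eval_batch_size_for_device("NVIDIA A100-SXM4-40GB"))
```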