Fix eval batch size on Nvidia A100 40GB (#1078)
Summary:
Use `torch.cuda.get_device_name()` to select the eval batch size on GPU devices.
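
A minimal sketch of the idea, for illustration only: the helper name `resolve_eval_batch_size` and the specific batch-size values are assumptions, not the actual code in this PR.

```python
import torch

def resolve_eval_batch_size(default_batch_size: int) -> int:
    """Pick an eval batch size based on the detected GPU (illustrative sketch)."""
    if not torch.cuda.is_available():
        return default_batch_size
    device_name = torch.cuda.get_device_name(torch.cuda.current_device())
    # An A100 40GB typically reports a name like "NVIDIA A100-SXM4-40GB".
    if "A100" in device_name and "40GB" in device_name:
        return 8  # hypothetical reduced batch size that fits in 40 GB of memory
    return default_batch_size
```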
Pull Request resolved: https://github.com/pytorch/benchmark/pull/1078
Reviewed By: FindHao
Differential Revision: D38362222
Pulled By: xuzhao9
fbshipit-source-id: 224fb18d8dcfc68374b3df333ca56acf86647663