Add more devices to `torchbenchmark.util.experiment.instantiator.list_devices()` (#2545)
Summary:
Follow-up from https://github.com/pytorch/benchmark/issues/2543#issuecomment-2487216599
This change allows all userbenchmarks to run on any available device.
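As a rough illustration of the idea, device discovery can be sketched as a lazy probe of the backends PyTorch exposes. This is a hypothetical sketch, not the actual `list_devices()` implementation in `torchbenchmark.util.experiment.instantiator`:

```python
def list_devices():
    """Sketch: enumerate devices the current PyTorch build can run on.

    Hypothetical illustration only -- the real list_devices() in
    torchbenchmark.util.experiment.instantiator may differ.
    """
    devices = ["cpu"]
    try:
        import torch
        if torch.cuda.is_available():
            devices.append("cuda")
        # MPS backend (Apple Silicon) may be absent in older builds.
        mps = getattr(torch.backends, "mps", None)
        if mps is not None and mps.is_available():
            devices.append("mps")
        # Ascend NPU support is provided by the torch_npu plugin,
        # which registers itself as torch.npu when installed.
        if hasattr(torch, "npu") and torch.npu.is_available():
            devices.append("npu")
    except ImportError:
        pass  # no torch installed; only cpu is reported
    return devices
```

`cpu` is always included, so userbenchmarks have a fallback even when no accelerator backend is present.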
## Userbenchmark - test_bench - BERT_pytorch
cuda:
```
$ python run_benchmark.py test_bench --models BERT_pytorch --device cuda
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='cuda', batch_size=None, extra_args=[], extra_env=None, output_dir=None) ... [done]
{
"name": "test_bench",
"environ": {
"pytorch_git_version": "ac47a2d9714278889923ddd40e4210d242d8d4ee",
"pytorch_version": "2.6.0.dev20241121+cu124",
"device": "Tesla T4"
},
"metrics": {
"model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=latencies": 122.69141,
"model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=cpu_peak_mem": 0.6962890625,
"model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=[], metric=gpu_peak_mem": 1.573486328125
}
}
```
mps:
```
$ python run_benchmark.py test_bench --models BERT_pytorch --device mps
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='mps', batch_size=None, extra_args=[], extra_env=None, output_dir=None) ... [done]
{
"name": "test_bench",
"environ": {
"pytorch_git_version": "dd2e6d61409aac22198ec771560a38adb0018ba2",
"pytorch_version": "2.6.0.dev20241120"
},
"metrics": {
"model=BERT_pytorch, test=eval, device=mps, bs=None, extra_args=[], metric=latencies": 133.299,
"model=BERT_pytorch, test=eval, device=mps, bs=None, extra_args=[], metric=cpu_peak_mem": 19.832832,
"model=BERT_pytorch, test=eval, device=mps, bs=None, extra_args=[], metric=gpu_peak_mem": "failed"
}
}
```
Ascend NPU:
```
$ python run_benchmark.py test_bench --models BERT_pytorch --device npu
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='npu', batch_size=None, extra_args=[], extra_env=None, output_dir=None) ... [done]
{
"name": "test_bench",
"environ": {
"pytorch_git_version": "64141411e0de61b61857e216ae7a8766f4f5969b",
"pytorch_version": "2.6.0.dev20240923"
},
"metrics": {
"model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=latencies": 21.688104,
"model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=cpu_peak_mem": 47.261696,
"model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=gpu_peak_mem": "failed"
}
}
```
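The metric keys in the JSON above encode the run configuration as comma-separated `k=v` pairs. A small sketch of how a consumer might split such a key back into fields (the parser below is illustrative, not part of the benchmark suite):

```python
def parse_metric_key(key):
    """Split a test_bench metric key like
    'model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=latencies'
    into a dict of its fields. Illustrative helper, not part of torchbenchmark.
    """
    fields = {}
    for part in key.split(", "):
        k, _, v = part.partition("=")
        fields[k] = v
    return fields

# Example using a key taken verbatim from the npu run above.
key = "model=BERT_pytorch, test=eval, device=npu, bs=None, extra_args=[], metric=latencies"
fields = parse_metric_key(key)
print(fields["model"], fields["device"], fields["metric"])
```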
cc: xuzhao9 jgong5 FFFrog
Pull Request resolved: https://github.com/pytorch/benchmark/pull/2545
Reviewed By: xuzhao9
Differential Revision: D66457386
Pulled By: FindHao
fbshipit-source-id: 0f3a8aba97a2cb2efc3f77f01bcd28cfc7182e0b