Fix a bug in test_bench (#2069)
Summary:
Bugfix in the `test_bench` userbenchmark.
Pull Request resolved: https://github.com/pytorch/benchmark/pull/2069
Test Plan:
```
$ python run_benchmark.py test_bench -m BERT_pytorch -d cuda -t train,eval --backend torchscript
Running TorchBenchModelConfig(name='BERT_pytorch', test='train', device='cuda', batch_size=None, extra_args=['--backend', 'torchscript'], extra_env=None, output_dir=None) ... [done]
Running TorchBenchModelConfig(name='BERT_pytorch', test='eval', device='cuda', batch_size=None, extra_args=['--backend', 'torchscript'], extra_env=None, output_dir=None) ... [done]
```
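The run completes for both tests and emits a JSON metrics report (latencies in milliseconds, peak memory in GB):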
```
{
    "name": "test_bench",
    "environ": {
        "pytorch_git_version": "b2f25d6342ed483b461e831c6f970ae59a4fcca2",
        "pytorch_version": "2.2.0.dev20231127+cu121",
        "device": "NVIDIA A100-PG509-200"
    },
    "metrics": {
        "model=BERT_pytorch, test=train, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=latencies": 284.174904,
        "model=BERT_pytorch, test=train, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=cpu_peak_mem": 8.958984375,
        "model=BERT_pytorch, test=train, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=gpu_peak_mem": 7.0191650390625,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=latencies": 169.736414,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=cpu_peak_mem": 2.6162109375,
        "model=BERT_pytorch, test=eval, device=cuda, bs=None, extra_args=['--backend', 'torchscript'], metric=gpu_peak_mem": 4.2965087890625
    }
}
```
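Each metric key encodes the model, test, device, batch size, and extra args as comma-separated `key=value` fields. A minimal sketch for pulling those fields back out with the standard library (the `metrics.json` path is hypothetical; point it at the report file that `run_benchmark.py` writes):
```python
import json
import re

# Load a test_bench metrics report (path is hypothetical; use the
# JSON file emitted by run_benchmark.py).
with open("metrics.json") as f:
    report = json.load(f)

# Keys look like:
#   "model=BERT_pytorch, test=train, device=cuda, bs=None,
#    extra_args=['--backend', 'torchscript'], metric=latencies"
# A plain split on ", " would break inside extra_args, so use a regex
# to extract the fields of interest.
key_re = re.compile(r"model=(\w+), test=(\w+), device=(\w+).*metric=(\w+)$")

for key, value in report["metrics"].items():
    match = key_re.match(key)
    if match is None:
        continue  # skip keys that don't follow the expected layout
    model, test, device, metric = match.groups()
    print(f"{model} / {test} / {device} / {metric}: {value}")
```
Run against the report above, this prints one line per metric, e.g. `BERT_pytorch / train / cuda / latencies: 284.174904`.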
Reviewed By: aaronenyeshi
Differential Revision: D51711874
Pulled By: xuzhao9
fbshipit-source-id: 6597bedd23b4b8b5b0e2a58ac403cdbea62a2dbc