Fix PyTorch CI HUD dashboard missing perf numbers: hf_Whisper (#1935)
Summary:
A few models were passing the accuracy check but, surprisingly, failing the perf run, resulting in dashboard entries like:
<img width="1696" alt="image" src="https://github.com/pytorch/benchmark/assets/9547562/eb0be16e-7785-486d-a362-322146a97423">
Reproducing the HUD's commands locally:
```
# pass
python benchmarks/dynamo/torchbench.py --accuracy --no-translation-validation --training --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 4 --partition-id 1 --output hf_Whisper_accuracy.csv --only hf_Whisper
# fail (on https://github.com/pytorch/benchmark/blob/4ea3bba3b8010f5d4a629bb8f530a92570f34518/torchbenchmark/util/model.py#L195C48-L195C48)
python benchmarks/dynamo/torchbench.py --performance --cold-start-latency --training --amp --backend inductor --disable-cudagraphs --device cuda --total-partitions 4 --partition-id 1 --output hf_Whisper_perf.csv --only hf_Whisper
```
The error suggests that hf_Whisper does not provide a batch size for the training-mode perf run.
Summarizing discussion with xuzhao9:
> I think we could:
> 1. set a default train batch size for hf_Whisper, if you still want to test forward/backward pass without a defined train test
> 2. in model.py, make sure self.batch_size is not None (before accuracy check overrides batch size to 4)
I implemented option 1: we set default batch sizes in the parent class of all benchmark models, which individual models can override.
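The fix above can be sketched roughly as follows. This is a minimal illustration, not the actual torchbenchmark code: the class name `BenchmarkModel` and attribute names `DEFAULT_TRAIN_BSIZE`/`DEFAULT_EVAL_BSIZE` are assumptions about the shape of the change, and `HfWhisper` with a batch size of 8 is a hypothetical example model.

```python
from typing import Optional


class BenchmarkModel:
    """Hypothetical parent class of all benchmark models.

    Class-level defaults so every model has a usable batch size;
    None would mean "no batch size defined for this test mode".
    """
    DEFAULT_TRAIN_BSIZE: Optional[int] = 4
    DEFAULT_EVAL_BSIZE: Optional[int] = 1

    def __init__(self, test: str, batch_size: Optional[int] = None):
        # An explicitly requested batch size always wins; otherwise
        # fall back to the class-level default for the test mode.
        if batch_size is not None:
            self.batch_size = batch_size
        elif test == "train":
            self.batch_size = self.DEFAULT_TRAIN_BSIZE
        else:
            self.batch_size = self.DEFAULT_EVAL_BSIZE
        # Fail loudly instead of crashing later in the perf run,
        # which is roughly what the hf_Whisper failure looked like.
        if self.batch_size is None:
            raise ValueError(f"No batch size defined for test {test!r}")


class HfWhisper(BenchmarkModel):
    # An individual model overriding the parent-class default
    # (value chosen for illustration only).
    DEFAULT_TRAIN_BSIZE = 8
```

With this layout, a model that defines nothing still gets the parent default, while a model like `HfWhisper` (or a CLI-supplied `--batch-size`) can override it.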
Pull Request resolved: https://github.com/pytorch/benchmark/pull/1935
Reviewed By: xuzhao9
Differential Revision: D49641235
Pulled By: xmfan
fbshipit-source-id: 2f93fb742846d7c34936cbbc8e8d3e22c5a76662