Dynamo option to disable() optimizer.step (#1308)
Summary:
Pull Request resolved: https://github.com/pytorch/benchmark/pull/1308
Previously I was working around the slow optimizer compile by calling `torch._dynamo.disable(self.optimizer.step)()` in the model_factory.py code, but we probably don't want to land that unconditionally for non-dynamo models. I think this does the same thing, but guarded behind a flag.
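For context, a minimal sketch of what the flag-guarded version could look like (the helper name and flag plumbing here are illustrative, not the exact patch; the flag corresponds to the `--dynamo_disable_optimizer_step` option used in the test command below):
```python
import torch

def maybe_disable_optimizer_step(optimizer, disable_flag: bool):
    """If disable_flag is set, wrap optimizer.step so Dynamo skips tracing/compiling it.

    The rest of the training loop remains eligible for compilation; only the
    optimizer step runs eagerly.
    """
    if disable_flag:
        optimizer.step = torch._dynamo.disable(optimizer.step)
    return optimizer
```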
Test:
```
ADAM_CAPTURABLE=1 python run.py hf_T5_large -t train -d cuda --torchdynamo inductor --torchinductor_cudagraph False --dynamo_disable_optimizer_step True
```
^ takes ~5 minutes, which is a lot faster than the ~30 min expected when the optimizer gets compiled.
Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D41326442
Pulled By: davidberard98
fbshipit-source-id: 71155193d50ff9df6a4c09da06d5b5c989259dd5