dbr quant overhead[7/x]: speed up AutoQuantizationState.reset_to_new_call (#68372)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68372
Speeds up `AutoQuantizationState.reset_to_new_call` by bypassing
the `getattr` and `setattr` overhead of `torch.nn.Module`.
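As a rough illustration of the technique (a hypothetical sketch, not the actual PR code): `torch.nn.Module.__setattr__` consults its parameter, buffer, and submodule dicts on every attribute write, so writing to the instance `__dict__` directly skips that per-call overhead. The `FakeModule` class below is a simplified stand-in for that machinery:

```python
# Hypothetical sketch: FakeModule mimics the extra dict checks that
# torch.nn.Module.__setattr__ performs before storing a plain attribute.
class FakeModule:
    def __init__(self):
        object.__setattr__(self, "_parameters", {})
        object.__setattr__(self, "_buffers", {})
        object.__setattr__(self, "_modules", {})
        self.idx = 0  # plain attribute, ends up in self.__dict__

    def __setattr__(self, name, value):
        # mimic nn.Module: check special dicts before plain storage
        if name in self._parameters or name in self._buffers \
                or name in self._modules:
            raise NotImplementedError("special attribute handling elided")
        object.__setattr__(self, name, value)


def reset_to_new_call_slow(mod):
    mod.idx = 0  # routed through __setattr__ and its dict checks


def reset_to_new_call_fast(mod):
    mod.__dict__["idx"] = 0  # bypasses __setattr__ entirely


m = FakeModule()
m.idx = 41
reset_to_new_call_fast(m)
print(m.idx)  # -> 0
```

Both resets leave the module in the same state; the fast variant just avoids the custom `__setattr__` path on the hot loop.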
Test Plan:
```
// MobileNetV2, 1x3x224x224 input, % of time spent by function during DBR convert
// before
reset_to_new_call - 1.09%
// after
reset_to_new_call - 0.18%
```
Reviewed By: jerryzh168
Differential Revision: D32463759
Pulled By: vkuzo
fbshipit-source-id: f3faa464372b0703f7d246680d62acd2782453e3