dbr quant overhead[3/x]: speed up AutoQuantizationState.mark_cur_op_complete (#68350)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68350
`torch.nn.Module` has overhead for getting and setting attributes because
it does various type checks on the attribute.
This PR explicitly gets and sets the underlying attribute for this particular
function, avoiding the type checks. Model level benchmarks are too noisy,
but according to function level profiling this reduces the time spent in
this function in a quantized model from 2.60% to 0.53%, on MobileNetV2 with
input size 1x3x224x224.
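As a rough sketch of the technique (not the actual PR code, and using a toy
stand-in class rather than `torch.nn.Module` itself): `nn.Module.__setattr__`
runs `isinstance` checks on every assignment and `__getattr__` searches the
`_parameters`/`_buffers`/`_modules` dicts, so hot-path code can skip that
overhead by reading and writing the instance `__dict__` directly.
```python
# Toy illustration of bypassing per-attribute type checks on a hot path.
# ModuleLike is a hypothetical stand-in for nn.Module's attribute machinery.
class ModuleLike:
    def __setattr__(self, name, value):
        # stand-in for the isinstance checks nn.Module runs on every set
        if not isinstance(value, (int, float, str, list, dict)):
            raise TypeError(f"unsupported type for attribute {name!r}")
        object.__setattr__(self, name, value)


def slow_increment(state):
    # goes through __setattr__ and its type checks on every call
    state.idx = state.idx + 1


def fast_increment(state):
    # reads and writes the instance __dict__ directly, skipping the checks
    state.__dict__["idx"] = state.__dict__["idx"] + 1


state = ModuleLike()
state.idx = 0
slow_increment(state)
fast_increment(state)
print(state.idx)  # both paths mutate the same attribute
```
Both functions produce identical results; the fast path only changes how the
attribute is reached, which is why it is safe when the attribute's type is
already known.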
Test Plan:
```
python test/test_quantization.py TestQuantizeDBR
```
Reviewed By: albanD
Differential Revision: D32463751
Pulled By: vkuzo
fbshipit-source-id: a29beed2a2b87ca4df675a30dd591f797c8a1dbe