dbr quant overhead [14/x]: cache whether an op is a module (#68877)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68877
Caches whether an op type is a module during tracing, so we
can avoid recomputing this check when validating the op during inference.
This leads to a small speedup.
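The pattern above can be sketched in plain Python (all names here are hypothetical, not the actual DBR quantization internals): compute a per-op property once at trace time, store it alongside the recorded op, and have the hot validation path read the cached flag instead of recomputing it.

```python
# Minimal sketch, assuming hypothetical names; the real check would be
# something like isinstance(op, torch.nn.Module).
from dataclasses import dataclass
from typing import Any


class FakeModule:
    """Stand-in for an nn.Module-like op in this sketch."""
    def forward(self, x):
        return x


def _is_module(op: Any) -> bool:
    # Stand-in for the real module check.
    return hasattr(op, "forward")


@dataclass
class SeenOpInfo:
    op: Any
    op_is_module: bool  # cached once, at trace time


def record_op(op: Any) -> SeenOpInfo:
    # Trace time: compute the property once and store it.
    return SeenOpInfo(op=op, op_is_module=_is_module(op))


def validate_cur_op(info: SeenOpInfo) -> bool:
    # Inference time: read the cached flag; no recomputation per call.
    return info.op_is_module
```

Since tracing happens once but validation runs on every inference call, moving the check to trace time trims per-call overhead.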
Test Plan:
```
python test/test_quantization.py TestQuantizeDBR
```
```
// MobileNetV2, 1x3x224x224, function level profiling
// before
validate_cur_op - 1.77%
// after
validate_cur_op - 1.41%
```
Reviewed By: jerryzh168
Differential Revision: D32646149
Pulled By: vkuzo
fbshipit-source-id: 03ebc4fedceb84bb885939dff8dec81d30ba6892