dbr quant: support for custom leaf modules, part 1/x (#70330)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/70330
Starts adding support for custom leaf modules, part 1/x.
In this PR, we ensure that leaf modules and all of their children
do not get `AutoQuantizationState` objects attached to them.
The API matches prepare_fx, using the `prepare_custom_config_dict`
argument and the `non_traceable_module_class` key within that dict
(see the usage sketch below).
The next couple of PRs will ensure that modules and functions inside
leaves do not get quantized; that work is kept separate to keep each PR small.
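
A minimal sketch of the intended configuration, assuming the DBR prepare
entry point mirrors prepare_fx and accepts the same `prepare_custom_config_dict`
argument; the exact DBR prepare import path and signature are assumptions
here, and `NonTraceableModule` / `M` are hypothetical example modules.
```
import torch
import torch.nn as nn

class NonTraceableModule(nn.Module):
    # stands in for a module DBR cannot introspect (e.g. data-dependent control flow)
    def forward(self, x):
        return x * 2

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.leaf = NonTraceableModule()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(self.leaf(x))

model = M().eval()
qconfig_dict = {'': torch.quantization.default_qconfig}
prepare_custom_config_dict = {
    # leaf modules of this class, and all of their children, should not get
    # AutoQuantizationState objects attached to them
    'non_traceable_module_class': [NonTraceableModule],
}

# hypothetical call, mirroring prepare_fx's argument names; the actual DBR
# prepare entry point and signature may differ:
# from torch.ao.quantization import _quantize_dbr
# model_prepared = _quantize_dbr.prepare(
#     model, qconfig_dict, torch.randn(1, 4),
#     prepare_custom_config_dict=prepare_custom_config_dict)
```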
Test Plan:
```
python test/test_quantization.py TestQuantizeDBR.test_prepare_custom_config_dict_non_traceable_module_class
```
Reviewed By: jerryzh168
Differential Revision: D33285310
Pulled By: vkuzo
fbshipit-source-id: 532025fda5532b420fad0a4a0847074d1ac4ad93