7560a7b2 - [Quant] Respect non_leaf_module_list for activation modules (#88498)

Summary: This commit fixes the bug where `non_leaf_module_list` was not respected for activation modules like `torch.nn.Sigmoid` and `torch.nn.Tanh`. Today, these modules default to `default_fixed_qparams_range_0to1_fake_quant`, and there is no way to configure them to use any other activation_post_process (e.g. `FixedQParamsObserver`); see this [mapping](https://github.com/pytorch/pytorch/blob/dc00bb51b8d370bf3891f0edb2c6e0c2914e329a/torch/ao/quantization/quantization_mappings.py#L188-L193). `non_leaf_module_list` is a "list of non-leaf modules we want to add observer" (see the `prepare` docstring). If the user explicitly specifies that observers should be inserted for these modules, we should respect that instead of continuing to use the default.

Test Plan:
python test/test_quantization.py TestQuantizeEagerPTQStatic.test_activations_in_non_leaf_module_list

Reviewers: vkuzo, jerryzh168

Subscribers: vkuzo, jerryzh168

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88498
Approved by: https://github.com/jerryzh168
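For readers unfamiliar with the eager-mode API, below is a minimal sketch of the usage this fix enables. It is not part of the commit: the model, the observer settings, and the use of `prepare`'s `observer_non_leaf_module_list` argument (the public counterpart of the internal `non_leaf_module_list` referenced above) are illustrative assumptions.

```python
# Sketch only: illustrates configuring a custom observer for an activation
# module via observer_non_leaf_module_list, which this fix makes effective.
import torch
from torch.ao.quantization import (
    QConfig,
    FixedQParamsObserver,
    default_weight_observer,
    prepare,
)

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        return self.sigmoid(x)

model = M()
# Ask for a custom FixedQParamsObserver rather than the hardcoded
# default_fixed_qparams_range_0to1_fake_quant (scale/zero_point are
# illustrative values).
model.qconfig = QConfig(
    activation=FixedQParamsObserver.with_args(scale=1.0 / 255.0, zero_point=0),
    weight=default_weight_observer,
)
# With this fix, the observer from qconfig is attached to torch.nn.Sigmoid
# because it is listed here; previously the default fake quant was always used.
prepared = prepare(model, observer_non_leaf_module_list=[torch.nn.Sigmoid])
print(prepared)
```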