[quant][fx] Merge is_general_tensor_shape_op into is_general_tensor_value_op in QuantizeHandler (#74601)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74601
Currently the behavior for general tensor shape ops and general tensor value ops is the same, so we can remove
the is_general_tensor_shape_op flag and merge it into the is_general_tensor_value_op flag.
The is_general_tensor_value_op flag is used in two places in prepare:
(1) dtype propagation: we only do dtype propagation when this flag is True (this will be refactored in the future to be more systematic)
(2) observer sharing: we use the input observer instance as the output observer for an op when this flag is True
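The observer-sharing behavior in (2) can be sketched as follows. This is a minimal illustration, not the actual QuantizeHandler code; the `Observer` class and `output_observer` helper are hypothetical stand-ins for the real torch.ao.quantization machinery:

```python
class Observer:
    """Minimal stand-in for a quantization observer instance."""
    def __init__(self, name):
        self.name = name

def output_observer(input_observer, is_general_tensor_value_op):
    # General tensor value ops (e.g. reshape, transpose) do not change
    # tensor values, so the output can safely reuse the input's observer,
    # giving both the same quantization parameters.
    if is_general_tensor_value_op:
        return input_observer          # same instance, shared qparams
    return Observer("fresh_output")    # otherwise observe the output separately

inp = Observer("input")
assert output_observer(inp, True) is inp       # observer is shared
assert output_observer(inp, False) is not inp  # a new observer is created
```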
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Imported from OSS
Reviewed By: george-qi
Differential Revision: D35071438
fbshipit-source-id: 5e8f5fd84e37db0433a63fe0a0e212ce3c5908d6
(cherry picked from commit b4bbc9fa0e65f3768eb97ca8e84b7cbd7e840b67)