[quant][graphmode][fx] Refactor node_name_to_target_dtype to make it more clear (#68317)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68317
We use node_name_to_target_dtype to store the target dtype for the output activation of each node, computed from the node's qconfig.
There are two problems with node_name_to_target_dtype that make it hard to work with:
1. We mutate node_name_to_target_dtype when we insert observers. This is confusing because it is typically unexpected
to change a data structure that stores "target" dtypes.
2. Currently it only stores the target dtype for output activations, while we also need target dtypes for input activations, weights, and biases.
This PR fixes both problems by removing the mutation of node_name_to_target_dtype and expanding each node's entry to include
the missing target dtypes for input activations, weights, and biases. We will do another refactor in the future to simplify the observation
of weight and bias dtypes.
Please see the code comments for the updated structure of node_name_to_target_dtype.
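To illustrate, a minimal sketch of what the expanded per-node entries could look like; the exact key names and dtype values here are assumptions for illustration, not the actual structure from this PR (strings stand in for torch dtypes such as torch.quint8 to keep the example self-contained):

```python
# Hypothetical expanded structure: each node maps to target dtypes for its
# input activation, weight, bias, and output activation (keys present only
# where applicable). Strings stand in for real torch dtypes.
node_name_to_target_dtype = {
    "conv1": {
        "input_activation_dtype": "torch.quint8",
        "weight_dtype": "torch.qint8",
        "bias_dtype": "torch.float",
        "output_activation_dtype": "torch.quint8",
    },
    "relu1": {
        "input_activation_dtype": "torch.quint8",
        "output_activation_dtype": "torch.quint8",
    },
}

def needs_input_observer(node_name, prev_node_name):
    """Illustrative helper (not from the PR): an observer is needed when the
    producer's output dtype differs from the consumer's expected input dtype."""
    prev_out = node_name_to_target_dtype[prev_node_name].get(
        "output_activation_dtype")
    cur_in = node_name_to_target_dtype[node_name].get(
        "input_activation_dtype")
    return cur_in is not None and prev_out != cur_in
```

Because the structure is read-only during observer insertion, a helper like the one above can decide where observers go without mutating the target dtype info.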
TODO: we may want to rename node_name_to_target_dtype to node_name_to_target_dtype_info in a separate PR.
Test Plan:
```
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
```
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D32411858
fbshipit-source-id: 3d76dd65056920ff8642899517bc1b95d43fc1de