fx quant: fix bug with fusion patterns and disabling quantization (#54654)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54654
Fixes a bug where disabling quantization on potential fusion patterns
would lead to errors in the `convert` function. For example:
1. have a model with add-relu
2. disable quantization for the part of the model containing add-relu
3. run `prepare` and `convert`; the `convert` step would fail because
the pattern's intermediate nodes were missing from `env` (see the
repro sketch after this list).
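A minimal repro sketch of the failure mode, assuming the FX graph mode
quantization API of this release (`prepare_fx(model, qconfig_dict)`);
the module structure, names, and shapes here are illustrative, not
copied from the test:
```
import torch
import torch.nn as nn
from torch.quantization import get_default_qconfig
from torch.quantization.quantize_fx import prepare_fx, convert_fx

class Child(nn.Module):
    # add followed by relu is a multi-node fusion pattern
    def forward(self, x):
        return nn.functional.relu(torch.add(x, 1.0))

class Parent(nn.Module):
    def __init__(self):
        super().__init__()
        self.child = Child()
        self.conv = nn.Conv2d(1, 1, 1)

    def forward(self, x):
        return self.conv(self.child(x))

m = Parent().eval()
# Disable quantization for the submodule containing add-relu by
# mapping its module name to a None qconfig.
qconfig_dict = {
    "": get_default_qconfig("fbgemm"),
    "module_name": [("child", None)],
}
mp = prepare_fx(m, qconfig_dict)
mp(torch.randn(1, 1, 4, 4))  # calibration
mq = convert_fx(mp)  # previously failed: add node missing from env
```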
The fix is to add handling for this edge case: if quantization is
disabled, we manually copy the nodes of multi-node fusion patterns
into the output graph (sketched below).
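A hedged sketch of the idea, not the exact upstream code: the helper
name and its arguments are hypothetical, while `torch.fx.Graph.node_copy`
is the real copy primitive.
```
from typing import Callable, Dict, List
import torch.fx

def copy_unquantized_pattern(
    matched_nodes: List[torch.fx.Node],
    quantized_graph: torch.fx.Graph,
    env: Dict[str, torch.fx.Node],
    load_arg: Callable,
) -> None:
    """Hypothetical helper mirroring the fix: when quantization is
    disabled (qconfig is None) for a multi-node fusion pattern such
    as add-relu, copy every node in the pattern, not just the root,
    so downstream nodes can resolve their inputs through `env`."""
    for node in matched_nodes:
        # node_copy clones `node` into quantized_graph, remapping its
        # args/kwargs through load_arg; record the copy under the
        # original node's name so later env lookups succeed.
        env[node.name] = quantized_graph.node_copy(node, load_arg)
```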
Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_fusion_pattern_unquantized
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D27318454
fbshipit-source-id: 27c1fd1cb7c9711a8e8d338200971c428dae8f98