ns for fx: fix shadowing bug to re-enable tests on torchvision models
Summary:
The tests were disabled by https://github.com/pytorch/pytorch/pull/61687, and
this specific behavior broke at some point while the tests were disabled.
The issue was that:
1. `torch.add` is present in these models
2. In the common codepath of comparing fp32 to int8, `torch.ops.quantized.add` was already filtered out because it did not have a dtype specified
3. In the less common codepath of comparing fp32 to fp32, `torch.add` was eligible for shadowing, but the shadowing logic was broken
This PR fixes (3) by disabling shadowing for op types which do not support it.
Support for these op types may be added later, if needed.
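The per-op-type gating described above can be sketched roughly as follows. This is an illustrative sketch only, not the actual Numeric Suite internals: the set name, function name, and string-based op targets are all assumptions made for the example.

```python
import operator

# Hypothetical denylist of op targets for which shadow comparison is not
# supported; real code would reference the actual callables/targets.
OPS_WITHOUT_SHADOW_SUPPORT = {
    operator.add,
    "torch.add",
    "torch.ops.quantized.add",
}

def op_supports_shadowing(op_target):
    # An op is eligible for shadowing only if its type is not in the
    # denylist; denylisted ops are skipped instead of hitting broken logic.
    return op_target not in OPS_WITHOUT_SHADOW_SUPPORT
```

With a check like this in the graph-matching pass, `torch.add` nodes are simply skipped during shadow-model construction rather than exercising the broken fp32-vs-fp32 shadowing path.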
Test plan:
```
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels.test_resnet18
python test/test_quantization.py TestFXNumericSuiteCoreAPIsModels.test_mobilenet_v2
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75472
Approved by: https://github.com/jerryzh168