fx quant: add fusion matching for operator.add and torch.relu (#71780)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71780
Adds support for matching operator.add -> torch.relu in FX graph
mode quantization.
It would be nice to support torch.relu better in general, but we are
saving that for a future PR to keep PRs small.
This is useful for DBR quant because some of its test cases use
add-relu, and we'd like their behavior to match FX.
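The matching described above pairs an add node with the relu node that consumes its output. Below is a minimal, self-contained sketch of that idea over a toy node list; it does not use the real torch.fx APIs, and the node representation and function name are hypothetical, for illustration only.

```python
import operator

# Hypothetical node representation (NOT the real torch.fx.Node API):
# each node is a tuple (name, target, input_names).
def find_add_relu_pairs(nodes):
    """Return (add_name, relu_name) pairs where a relu consumes an add.

    The string "relu" stands in for torch.relu; operator.add is the
    same callable the real matcher looks for.
    """
    by_name = {name: (target, inputs) for name, target, inputs in nodes}
    pairs = []
    for name, target, inputs in nodes:
        if target == "relu" and len(inputs) == 1:
            producer = by_name.get(inputs[0])
            if producer is not None and producer[0] is operator.add:
                pairs.append((inputs[0], name))
    return pairs

# Example graph: relu(add(a, b))
nodes = [
    ("a", "placeholder", []),
    ("b", "placeholder", []),
    ("add1", operator.add, ["a", "b"]),
    ("relu1", "relu", ["add1"]),
]
print(find_add_relu_pairs(nodes))  # [('add1', 'relu1')]
```

In the actual FX quantization code, matched add-relu pairs are fused so a single quantized kernel can replace both ops.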
Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_add_relu
python test/test_quantization.py TestQuantizeFxOps.test_mul_relu
```
Reviewed By: jerryzh168
Differential Revision: D33775096
Pulled By: vkuzo
fbshipit-source-id: 889d9b41d3758ecbbb6d7eab67f64ce3d4892d24
(cherry picked from commit c1f9f38ca11161b63c8f6a21eb4a079feb3300f9)