[Quant][XNNPACK] Delegate add_relu fusion (#103266)
Quantized ResNet currently sees the following fused add-relu pattern:
```
--> dq
       \
        add --> relu --> quant
       /
--> dq
```
Let us support this fusion in the delegate, since XNNPACK can use the `output_min` and `output_max` of the op nodes to clamp the values and perform a fused add-relu operation.
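To illustrate the clamping idea: a ReLU after a quantized add clamps the float output to `[0, inf)`, which in the quantized domain corresponds to `[zero_point, qmax]`. The delegate can therefore fold the ReLU away by tightening the add op's output range. The helper below is a minimal sketch of this computation (the function name and signature are hypothetical, not part of the actual delegate code):

```python
def relu_fused_output_range(zero_point: int, qmin: int, qmax: int) -> tuple[int, int]:
    """Sketch: quantized output range for add fused with relu.

    A float value of 0.0 maps to `zero_point` in the quantized domain,
    so clamping at 0.0 (relu) means clamping at `zero_point`.
    """
    # Without the fused relu the range would be [qmin, qmax]; relu
    # raises the lower bound to the quantized representation of 0.0.
    output_min = max(qmin, zero_point)
    output_max = qmax
    return output_min, output_max

# e.g. for a qint8 tensor with zero_point=10:
# relu_fused_output_range(10, -128, 127) -> (10, 127)
```

XNNPACK can then apply these bounds directly as the add operator's `output_min`/`output_max`, so no separate relu node is needed at runtime.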
Differential Revision: [D45258028](https://our.internmc.facebook.com/intern/diff/D45258028/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103266
Approved by: https://github.com/jerryzh168