[quant] Add fused "q - qlinear - dq" operator with skipped quant op for output of linear (#89882)
Summary:
Added two ops:
* torch.ops.quantized.linear_with_input_q_dq_qweight_dq_output_fp32
* torch.ops.quantized.linear_with_input_q_dq_qweight_dq_relu_output_fp32
The corresponding pattern for `linear_with_input_q_dq_qweight_dq_output_fp32` is:
```
input -> q* -> dq* -> linear* ->
qweight -> dq* /
```
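The pattern above can be sketched as regular (unfused) eager-mode ops. This is a minimal illustration of the reference computation the fused op replaces, not the fused op itself; the scales and zero points are arbitrary example values:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 8)    # fp32 input
w = torch.randn(16, 8)   # fp32 weight (out_features=16, in_features=8)

# input -> q -> dq
x_q = torch.quantize_per_tensor(x, scale=0.05, zero_point=0, dtype=torch.qint8)
x_dq = x_q.dequantize()

# qweight -> dq
w_q = torch.quantize_per_tensor(w, scale=0.02, zero_point=0, dtype=torch.qint8)
w_dq = w_q.dequantize()

# linear on dequantized operands; output stays fp32
out = F.linear(x_dq, w_dq)
```

The `_relu_` variant corresponds to the same pattern with a `torch.relu` applied to the linear output.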
Test Plan:
python test/test_quantization.py -k TestQuantizedLinear.test_qlinear_with_input_q_dq_qweight_dq
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89882
Approved by: https://github.com/vkuzo