[quant][pt2e] Add support for conv bn fusion in et backend config (#97389)
Batch norm is supported in XNNPACK by fusing it with the preceding convolution op. We do the same here, fusing across the q -> dq nodes inserted by quantization.
We must also update the original fusion pass so that it folds the batch norm parameters into the convolution weight and bias; this is what makes batch norm supported under quantization.
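For context, the weight/bias folding referred to above follows the standard conv-bn fusion arithmetic: each output channel's weights are scaled by gamma / sqrt(var + eps), and the bias is recentered by the batch norm statistics. The sketch below is illustrative only (plain Python, per-channel lists standing in for tensors; the function name and shapes are hypothetical, not the actual pass in this PR):

```python
import math

def fuse_conv_bn_params(conv_w, conv_b, bn_mean, bn_var, bn_gamma, bn_beta,
                        eps=1e-5):
    """Fold batch norm parameters into conv weight/bias, per output channel.

    conv_w: list of per-output-channel weight lists
    conv_b: list of per-output-channel biases
    bn_*:   per-channel batch norm statistics and affine parameters
    """
    fused_w, fused_b = [], []
    for c in range(len(conv_w)):
        # scale = gamma / sqrt(running_var + eps)
        scale = bn_gamma[c] / math.sqrt(bn_var[c] + eps)
        # W' = W * scale;  b' = (b - running_mean) * scale + beta
        fused_w.append([w * scale for w in conv_w[c]])
        fused_b.append((conv_b[c] - bn_mean[c]) * scale + bn_beta[c])
    return fused_w, fused_b
```

After this fold, running the fused convolution alone is numerically equivalent to running the original convolution followed by batch norm, which is why the batch norm node can be removed from the quantized graph.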
Differential Revision: [D43976324](https://our.internmc.facebook.com/intern/diff/D43976324/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97389
Approved by: https://github.com/salilsdesai