[Quant][FX] Lower QLinearLeakyReLU for onednn backend (#88668)
**Summary**
Add quantization mappings for `QLinearLeakyReLU` to enable int8 inference with the onednn backend. Fusion and lowering are supported only in FX mode.
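A minimal usage sketch (the module definition, shapes, and calibration step below are illustrative, not from this PR): with the quantized engine set to `onednn`, FX-mode prepare/convert is expected to fuse the `Linear` + `LeakyReLU` pattern and lower it to the fused quantized op.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical float model containing the Linear + LeakyReLU pattern.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(16, 16)
        self.leaky_relu = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.leaky_relu(self.linear(x))

# Select the onednn quantized engine and its default qconfig mapping.
torch.backends.quantized.engine = "onednn"
qconfig_mapping = get_default_qconfig_mapping("onednn")

model = M().eval()
example_inputs = (torch.randn(1, 16),)

# FX-mode prepare -> calibrate -> convert; the Linear + LeakyReLU
# pattern should lower to the fused quantized op during convert.
prepared = prepare_fx(model, qconfig_mapping, example_inputs)
prepared(*example_inputs)  # calibration pass with representative data
quantized = convert_fx(prepared)
```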
**Test plan**
```
python test_quantization.py TestQuantizeFx
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88668
Approved by: https://github.com/jgong5, https://github.com/jerryzh168