[quant] Add quantized::leaky_relu that takes scale/zero_point as input (#45702)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45702
https://github.com/pytorch/pytorch/issues/45593
Previously, quantized leaky_relu did not require observation and simply inherited
the quantization parameters from its input, but that does not work well in QAT.
This PR adds a quantized::leaky_relu that observes the output (taking output
scale/zero_point as arguments), and it will become the default leaky_relu that
our quantization tools produce (eager/graph mode).
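For illustration only (this is a pure-Python sketch, not the actual PyTorch kernel, and the helper names are hypothetical): leaky_relu shrinks negative values by negative_slope, so reusing the input's quantization parameters for the output rounds those small negative outputs to the zero_point. Requantizing with an observed output scale preserves them:

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine-quantize a float to an integer in [qmin, qmax]."""
    return max(qmin, min(qmax, round(x / scale) + zero_point))

def dequantize(q, scale, zero_point):
    """Recover the float value represented by quantized integer q."""
    return (q - zero_point) * scale

def quantized_leaky_relu(q_in, in_scale, in_zp, out_scale, out_zp,
                         negative_slope=0.01):
    # Dequantize, apply leaky_relu in float, then requantize using the
    # OUTPUT scale/zero_point (the new behavior) rather than the input's.
    x = dequantize(q_in, in_scale, in_zp)
    y = x if x >= 0 else x * negative_slope
    return quantize(y, out_scale, out_zp)

q = quantize(-1.0, 0.1, 128)  # -> 118; input uses scale=0.1, zp=128

# Old behavior (output reuses input scale): -0.01 rounds to the zero_point,
# i.e. the negative output is lost entirely.
print(quantized_leaky_relu(q, 0.1, 128, 0.1, 128))    # -> 128

# New behavior (observer picks a tighter output scale): -0.01 survives.
print(quantized_leaky_relu(q, 0.1, 128, 0.001, 128))  # -> 118
```

In QAT the output observer picks out_scale/out_zp from the observed output range, which is exactly the information the old, parameter-inheriting op threw away.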
Test Plan: Imported from OSS
Reviewed By: raghuramank100
Differential Revision: D24067681
fbshipit-source-id: d216738344363794b82bd3d75c8587a4b9415bca