[quant] Add a quantize_per_tensor overload that takes Tensor quantization parameters (#59773)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59773
The current `quantize_per_tensor` takes a float `scale` and an int `zero_point`, which does not work with FX `Proxy`;
this PR adds a `quantize_per_tensor` overload that takes a Tensor `scale` and a Tensor `zero_point` instead.
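For illustration, a minimal sketch of the two call forms (the exact dtypes expected for the Tensor `scale` and `zero_point` are an assumption here):
```python
import torch

x = torch.randn(2, 3)

# Existing overload: Python float scale and int zero_point
q_float = torch.quantize_per_tensor(x, 0.1, 10, torch.quint8)

# New overload from this PR: scale and zero_point passed as Tensors,
# so the call also works when the arguments are FX Proxy values
q_tensor = torch.quantize_per_tensor(
    x, torch.tensor(0.1), torch.tensor(10), torch.quint8
)

assert torch.equal(q_float.int_repr(), q_tensor.int_repr())
```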
Test Plan:
Tested locally that the following runs without errors:
```python
import torch
from torch.quantization.quantize_fx import prepare_fx, convert_fx

class TestModule(torch.nn.Module):
    def forward(self, x):
        return x + x

mod = TestModule()
mod.eval()
config = {"": torch.quantization.get_default_qconfig("fbgemm")}
mod = prepare_fx(mod, config)
mod = convert_fx(mod)
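# Transformer retraces the module through Proxy values, which exercises the new overload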
mod = torch.fx.Transformer(mod).transform()
```
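`torch.fx.Transformer` retraces the converted module with `Proxy` values, so the `quantize_per_tensor` calls that `convert_fx` inserts go through the code path that previously failed with float/int quantization parameters.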
Imported from OSS
Reviewed By: vkuzo
Differential Revision: D29019862
fbshipit-source-id: c0176040f3b73f0a30516ed17d261b44cc658407