quant: throw a nice error message for allclose with quantized inputs (#49802)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49802
Currently `torch.allclose` does not support quantized inputs. Raise a clear, descriptive error message instead of a cryptic one.
Test Plan:
```
torch.allclose(x_fp32, y_fp32)  # fp32 inputs: works as before
torch.allclose(x_int8, y_int8)  # quantized inputs: now raises a clear error
```
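A minimal sketch of the behavior described above, assuming quantized tensors are built with `torch.quantize_per_tensor` (the scale and zero point values are arbitrary, chosen only for illustration):

```python
import torch

x_fp32 = torch.randn(4)
y_fp32 = torch.randn(4)

# Float inputs work as usual.
torch.allclose(x_fp32, x_fp32)

# Quantize the same tensors to qint8.
x_int8 = torch.quantize_per_tensor(x_fp32, scale=0.1, zero_point=0, dtype=torch.qint8)
y_int8 = torch.quantize_per_tensor(y_fp32, scale=0.1, zero_point=0, dtype=torch.qint8)

# With this change, quantized inputs raise a RuntimeError with a
# descriptive message rather than a cryptic internal error.
try:
    torch.allclose(x_int8, y_int8)
except RuntimeError as e:
    print("allclose rejected quantized inputs:", e)
```

Catching `RuntimeError` also covers `NotImplementedError`, which subclasses it in Python, so the sketch works regardless of which of the two `TORCH_CHECK`-style errors the kernel raises.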
Imported from OSS
Reviewed By: supriyar
Differential Revision: D25693538
fbshipit-source-id: 8958628433adfca3ae6ce215f3e3ec3c5e29994c