[Quant] Separate FBGEMM/QNNPACK BackendConfigs (#83566)
Summary: Previously we used a single BackendConfig
(get_native_backend_config) for both the FBGEMM and QNNPACK
backends. However, these two backends have subtle differences
in terms of their requirements that cannot be satisfied using
a single BackendConfig. Therefore, this commit is the first step
towards decoupling the two backends. The real change in
functionality will come in a future commit after DTypeConfig
supports quant_min/quant_max and scale_min/scale_max. Existing
uses of `get_native_backend_config` should not be affected.
Public-facing changes:
```
from torch.ao.quantization.backend_config import (
    get_fbgemm_backend_config,
    get_qnnpack_backend_config,
)

fbgemm_backend_config = get_fbgemm_backend_config()
qnnpack_backend_config = get_qnnpack_backend_config()
```
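For context, here is a minimal sketch of how one of the new backend-specific
configs could be passed through FX graph mode quantization. The toy model,
the use of get_default_qconfig_mapping, and the backend_config keyword on
prepare_fx/convert_fx are assumptions based on the surrounding API at the
time, not part of this change:
```
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx
from torch.ao.quantization.backend_config import get_qnnpack_backend_config

# Hypothetical toy model, for illustration only.
model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 4),)

# Use the QNNPACK-specific BackendConfig instead of the shared native one.
backend_config = get_qnnpack_backend_config()
qconfig_mapping = get_default_qconfig_mapping("qnnpack")

prepared = prepare_fx(
    model, qconfig_mapping, example_inputs, backend_config=backend_config
)
prepared(*example_inputs)  # run representative data to calibrate observers
quantized = convert_fx(prepared, backend_config=backend_config)
```
Until DTypeConfig supports quant_min/quant_max and scale_min/scale_max,
get_fbgemm_backend_config and get_qnnpack_backend_config return the same
settings as get_native_backend_config.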
Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
Reviewers: jerryzh168
Subscribers: jerryzh168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83566
Approved by: https://github.com/jerryzh168