Override Quantized Backend to use Fbgemm in Qlinear Packed Params Test (#86236)
Summary: After D39934051, we must explicitly wrap this test in `override_quantized_engine('fbgemm')` for it to pass.
Test Plan:
```
buck test //caffe2/test:ao -- TestQlinearPackedParams
```
Before:
```
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/5629499663624574
✓ ListingSuccess: caffe2/test:ao : 72 tests discovered (32.830)
✓ Pass: caffe2/test:ao - test_qlinear_packed_params_qnnpack (ao.sparsity.test_qlinear_packed_params.TestQlinearPackedParams) (25.085)
✗ Fail: caffe2/test:ao - test_qlinear_packed_params (ao.sparsity.test_qlinear_packed_params.TestQlinearPackedParams) (26.706)
Test output:
> RuntimeError: Didn't find engine for operation ao::sparse::qlinear_prepack X86
```
After:
```
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/7599824485968786
✓ ListingSuccess: caffe2/test:ao : 72 tests discovered (31.082)
✓ Pass: caffe2/test:ao - test_qlinear_packed_params_fbgemm (ao.sparsity.test_qlinear_packed_params.TestQlinearPackedParams) (100.409)
✓ Pass: caffe2/test:ao - test_qlinear_packed_params_qnnpack (ao.sparsity.test_qlinear_packed_params.TestQlinearPackedParams) (100.544)
Summary
Pass: 2
ListingSuccess: 1
```
Differential Revision: D40078176
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86236
Approved by: https://github.com/jmdetloff, https://github.com/z-a-f