Add support to call unpack for pytorch mobile quantized FC and Conv (#26211)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26211
Currently QNNPACK does not have an unpack function like FBGEMM does.
In order to be able to script quantized models for mobile, we need to be able to save unpacked weights.
This change stores the original weight and bias in the opaque packed-weight struct and simply returns them when unpack is called.
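The approach can be sketched as follows (a minimal illustration with hypothetical struct and field names, not the actual PyTorch types): the packed representation carries a copy of the original quantized weight and the bias, so unpack is just a field read rather than a reverse transformation of the backend's packed layout.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical sketch of an opaque packed-weight struct for a quantized
// linear op. Alongside the backend-specific packed buffer, it retains the
// original quantized weight and bias so unpack() can simply return them.
struct PackedLinearWeightsSketch {
    std::vector<int8_t> packed;       // backend-specific packed buffer
    std::vector<int8_t> orig_weight;  // original quantized weight, kept for unpack
    std::vector<float> bias;          // original bias, kept for unpack

    // Unpack is trivial: return the stored originals instead of
    // reconstructing them from the packed buffer.
    std::pair<std::vector<int8_t>, std::vector<float>> unpack() const {
        return {orig_weight, bias};
    }
};
```

This trades a small amount of extra memory per packed weight for the ability to round-trip weights through serialization, which is what scripting quantized models for mobile requires.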
Test Plan:
python test/test_quantized.py TestQNNPackOps.test_qconv_unpack
python test/test_quantized.py TestQNNPackOps.test_qlinear_unpack
Imported from OSS
Differential Revision: D17464430
fbshipit-source-id: 83ad5a2556dcf13245a1047feef6cfb489c9ef69