[Quant][Inductor] Enable qlinear weight prepack inside inductor constant folding (#106782)
**Summary**
To enable weight prepacking for quantized linear, we replace the following pattern
```
int8 activation
|
dequant_per_tensor
|
mm/addmm <- t <- dequant_per_channel <- int8_weight
```
with
```
int8 activation
|
onednn.qlinear_pointwise <- onednn.qlinear_prepack <- int8_weight
```
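For reference, here is a minimal sketch of what the original pattern computes numerically. All shapes, scales, and zero points below are made up purely for illustration:
```python
import torch

# Hypothetical int8 activation/weight plus quantization params (illustration only).
x_int8 = torch.randint(-128, 128, (4, 16), dtype=torch.int8)
w_int8 = torch.randint(-128, 128, (8, 16), dtype=torch.int8)
x_scale, x_zp = 0.1, 0             # per-tensor params for the activation
w_scales = torch.full((8,), 0.02)  # per-channel scales for the weight (axis 0)

# dequant_per_tensor on the int8 activation
x_fp32 = (x_int8.to(torch.float32) - x_zp) * x_scale
# dequant_per_channel <- int8_weight
w_fp32 = w_int8.to(torch.float32) * w_scales.unsqueeze(1)
# mm <- t
out = torch.mm(x_fp32, w_fp32.t())
```
After the replacement, onednn.qlinear_pointwise consumes the int8 activation and the prepacked weight directly, so the dequantized fp32 weight is never materialized at runtime.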
We register the weight prepack path inside Inductor constant folding: constant folding evaluates the prepack op at compile time and replaces it with the prepacked weight (a constant parameter).
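The following FX-level sketch shows the folding idea only; it is not the actual Inductor implementation, and `fold_prepack`/`prepack_targets` are hypothetical names (for this PR, `prepack_targets` would contain the onednn.qlinear_prepack op):
```python
import torch.fx as fx

def fold_prepack(gm: fx.GraphModule, prepack_targets) -> fx.GraphModule:
    # If every input of a prepack call is a constant (a get_attr node),
    # evaluate the op once at compile time and replace the call with a
    # new constant attribute holding the prepacked weight.
    for node in list(gm.graph.nodes):
        if node.op != "call_function" or node.target not in prepack_targets:
            continue
        if not all(a.op == "get_attr" for a in node.all_input_nodes):
            continue
        # Materialize the constant args (assumes flat, non-dotted attribute
        # names; kwargs omitted for brevity).
        concrete_args = fx.node.map_arg(node.args, lambda n: getattr(gm, n.target))
        packed = node.target(*concrete_args)
        attr_name = f"_folded_{node.name}"
        setattr(gm, attr_name, packed)
        with gm.graph.inserting_before(node):
            const_node = gm.graph.get_attr(attr_name)
        node.replace_all_uses_with(const_node)
        gm.graph.erase_node(node)
    gm.graph.lint()
    gm.recompile()
    return gm
```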
**Test plan**
```
python test/inductor/test_mkldnn_pattern_matcher.py -k test_qlinear_unary
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106782
Approved by: https://github.com/jgong5, https://github.com/leslie-fang-intel, https://github.com/eellison
ghstack dependencies: #105818, #106781