fx quant: do not observe bias on F.linear (#49628)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49628
Ensures that linear bias is not observed in a `F.linear` call. This should
be a small speedup in PTQ, and will change numerics (in a good way) for
QAT if someone is using `F.linear`.
Note: the implementation is slightly more verbose than the conv handling
because bias is a keyword argument in `F.linear`.
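The idea can be sketched as follows. This is a minimal, hypothetical illustration (the helper name and argument handling are assumptions, not the actual PR code): when selecting which arguments of an `F.linear` call node to observe, only the input and weight are kept, and bias is skipped whether it is passed positionally or as the `bias=` keyword.

```python
# Hypothetical sketch of skipping bias observation for F.linear.
# F.linear has signature linear(input, weight, bias=None), so bias may
# arrive as the third positional arg or as the 'bias' keyword.

def linear_args_to_observe(args, kwargs):
    """Return the arguments of an F.linear call that should get observers.

    Only input (arg 0) and weight (arg 1) are observed; bias is skipped
    regardless of whether it was passed positionally or by keyword.
    """
    # keep at most the first two positional args (input, weight)
    observed = list(args[:2])
    # 'bias' in kwargs is intentionally ignored, so it is never observed
    return observed

# Positional bias: F.linear(x, w, b) -> observe only x and w
assert linear_args_to_observe(("x", "w", "b"), {}) == ["x", "w"]
# Keyword bias: F.linear(x, w, bias=b) -> same result
assert linear_args_to_observe(("x", "w"), {"bias": "b"}) == ["x", "w"]
```

Because observers are never inserted on the bias input, PTQ does less work at calibration time, and QAT no longer fake-quantizes the bias, which is where the numerics change comes from.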
Test Plan:
```
python test/test_quantization.py TestQuantizeFxOps.test_linear_functional_bias_not_observed
```
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D25653532
fbshipit-source-id: c93501bf6b55cbe4a11cfdad6f79313483133a39