Port fuse_linear from pytorch/tvm (#25623)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25623
Port over the fuse_linear pass from the pytorch/tvm project. We'll need it
in the backend-specific quantization pass to match aten::linear and swap
it with its quantized counterpart.
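
The idea of such a fusion pass can be sketched as a toy graph rewrite. This is a simplified illustration, not the actual pass (which operates on TorchScript IR); the `Node` class and `fuse_linear` helper here are hypothetical, showing only the pattern-match-and-replace shape: a decomposed `matmul` + `add` pair is collapsed into a single `linear` node that later passes can match.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    op: str
    inputs: List[str] = field(default_factory=list)
    output: str = ""

def fuse_linear(graph: List[Node]) -> List[Node]:
    """Toy rewrite: matmul(x, w_t) followed by add(+b) -> linear(x, w_t, b)."""
    fused: List[Node] = []
    i = 0
    while i < len(graph):
        n = graph[i]
        # Match a matmul whose result is immediately consumed by an add (bias).
        if (n.op == "matmul" and i + 1 < len(graph)
                and graph[i + 1].op == "add"
                and graph[i + 1].inputs[0] == n.output):
            add = graph[i + 1]
            # Replace the two-node pattern with one fused linear node.
            fused.append(Node("linear", n.inputs + [add.inputs[1]], add.output))
            i += 2
        else:
            fused.append(n)
            i += 1
    return fused

graph = [
    Node("matmul", ["x", "w_t"], "t1"),
    Node("add", ["t1", "b"], "y"),
]
print([n.op for n in fuse_linear(graph)])  # -> ['linear']
```

Once decomposed patterns are normalized into a single linear op like this, a backend-specific quantization pass only has to match one node shape instead of every decomposition.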
Test Plan:
python test/test_jit.py 'TestJit.test_fuse_linear'
Imported from OSS
Differential Revision: D17208890
fbshipit-source-id: f4ff3889ae4525797d3b986f46ae37e50ea49116