Fold weight permutation inside quantized conv operator (#26241)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26241
According to https://github.com/pytorch/pytorch/issues/19092, we always keep tensors in NCHW order and handle the layout inside the kernels. This PR fixes this for the weights of qconv by using the MemoryLayout mechanism.
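To illustrate the idea behind keeping NCHW logical order while handling layout inside the kernel, here is a minimal stride-arithmetic sketch (not the PR's actual code): a tensor can keep its logical (N, C, H, W) shape while its data is physically laid out channels-last (NHWC), expressed purely through strides. The helper names below are hypothetical.

```python
def contiguous_strides(shape):
    # Row-major (NCHW-contiguous) strides for the given shape.
    strides = [1] * len(shape)
    for i in range(len(shape) - 2, -1, -1):
        strides[i] = strides[i + 1] * shape[i + 1]
    return strides

def channels_last_strides(shape):
    # Logical shape stays (N, C, H, W); physical layout is NHWC,
    # so the channel dimension gets stride 1.
    n, c, h, w = shape
    return [h * w * c, 1, w * c, c]

shape = (2, 3, 4, 5)  # N, C, H, W
print(contiguous_strides(shape))     # NCHW-contiguous strides: [60, 20, 5, 1]
print(channels_last_strides(shape))  # same logical shape, NHWC memory: [60, 1, 15, 3]
```

Folding the weight permutation into the quantized conv operator means the kernel consumes weights in this channels-last physical layout directly, instead of the caller permuting the tensor up front.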
Test Plan: Imported from OSS
Differential Revision: D17443219
Pulled By: dzhulgakov
fbshipit-source-id: ce0eb92034a9977b3303dafab8b0414575171062