Change quantizer to account for input tensor's memory format. (#42178)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42178
Without this change, the quantizer ignores the input tensor's memory format
and produces a contiguous (NCHW) output, which introduces unnecessary calls
to contiguous in the rest of the network, where certain ops want the
channels-last format.
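A minimal sketch of the behavior this change targets, assuming the standard torch.quantize_per_tensor API (scale, zero_point, and dtype values here are arbitrary, for illustration only):

```python
import torch

# A 4D float tensor laid out in channels-last (NHWC) memory format.
x = torch.randn(1, 3, 4, 4).contiguous(memory_format=torch.channels_last)

# Quantize it; with this change the quantizer should preserve the input's
# memory format rather than forcing a contiguous (NCHW) layout, so
# downstream channels-last ops need no extra .contiguous() calls.
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

print(qx.is_contiguous(memory_format=torch.channels_last))
```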
Test Plan:
Quantization tests.
Imported from OSS
Reviewed By: jerryzh168
Differential Revision: D22796479
fbshipit-source-id: f1ada1c2eeed84991b9b195120699b943ef6e421