2394e6ba - [quant][fx] Change prepare_fx and convert_fx to preserve the GraphModule type of input (#94412)

Summary:
Previously, prepare_fx returned an ObservedGraphModule and convert_fx returned a QuantizedGraphModule. These custom types existed to preserve attributes that torch.fx.GraphModule did not preserve. After https://github.com/pytorch/pytorch/pull/92062, `model.meta` is preserved, so the attributes can be stored in `model.meta` instead. With this change, these functions no longer need to create new GraphModule subtypes and can use GraphModule directly. This is useful for quantization in the PyTorch 2.0 flow: if other transformations also operate on GraphModule, the quantization passes compose with them.

Test Plan:
python test/test_quantization.py TestQuantizeFx
python test/test_quantization.py TestQuantizeFxOps
python test/test_quantization.py TestQuantizeFxModels
python test/test_quantization.py TestQuantizePT2E

Imported from OSS

Differential Revision: D42979722

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94412
Approved by: https://github.com/vkuzo
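A minimal sketch of the behavior this commit describes: after the change, both prepare_fx and convert_fx are expected to return plain torch.fx.GraphModule instances instead of the ObservedGraphModule/QuantizedGraphModule subtypes. The toy module, the "fbgemm" qconfig choice, and the assertions below are illustrative assumptions, not code from the commit.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical toy model for illustration only.
class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return self.linear(x)

model = M().eval()
example_inputs = (torch.randn(1, 4),)
qconfig_mapping = get_default_qconfig_mapping("fbgemm")

prepared = prepare_fx(model, qconfig_mapping, example_inputs)
# Previously an ObservedGraphModule subtype; after this change, a plain GraphModule.
assert isinstance(prepared, torch.fx.GraphModule)

converted = convert_fx(prepared)
# Previously a QuantizedGraphModule subtype; after this change, also a plain
# GraphModule, with quantization attributes kept in `converted.meta` (per the
# summary above; the exact meta keys are an internal detail).
assert isinstance(converted, torch.fx.GraphModule)
```

Because the outputs are ordinary GraphModules, downstream FX passes that accept torch.fx.GraphModule should compose with the quantization passes without special-casing the observed or quantized types.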