pytorch
8066e89f - quant: fix bug with copy.deepcopy of FX prepared quantization models (#46895)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/46895

Bug: models lost information after the FX graph mode quant prepare step, such as the extra attributes defined in `Quantizer.save_state`, if the user performed `copy.deepcopy` on them. The information was lost because `GraphModule` does not copy attributes which are not present on `nn.Module` by default.

Fix: define a custom `__deepcopy__` method on observed models and whitelist the attributes we care about. This is needed because users sometimes run `copy.deepcopy` on their models during non-quantization-related preparation, and quantization-related state should survive these calls.

Test Plan:
```
python test/test_quantization.py TestQuantizeFx.test_deepcopy
python test/test_quantization.py TestQuantizeFx.test_standalone_module
```

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D24556035

fbshipit-source-id: f7a6b28b6d2225fa6189016f967f175f6733b124
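The shape of the fix can be sketched in plain Python, without the real `torch.fx` machinery. The `GraphModule` stand-in below mimics the problematic behavior (its deepcopy rebuilds only from the graph, dropping everything else), and the observed subclass overrides `__deepcopy__` to re-attach a whitelist of extra attributes. The class and attribute names (`_preserved_attrs`, `_qconfig_map`, `_node_name_to_scope`) are illustrative assumptions, not the actual identifiers used in the PR.

```python
import copy


class GraphModule:
    """Minimal stand-in for torch.fx.GraphModule: its deepcopy
    rebuilds the module from its graph alone, so any extra
    attributes attached after construction are silently dropped."""

    def __init__(self, graph):
        self.graph = graph

    def __deepcopy__(self, memo):
        # Only the graph survives; this models the original bug.
        return GraphModule(copy.deepcopy(self.graph, memo))


class ObservedGraphModule(GraphModule):
    """Observed model whose quantization state must survive deepcopy."""

    # Hypothetical whitelist of extra attributes to preserve.
    _preserved_attrs = ("_qconfig_map", "_node_name_to_scope")

    def __init__(self, graph, qconfig_map, scope_map):
        super().__init__(graph)
        self._qconfig_map = qconfig_map
        self._node_name_to_scope = scope_map

    def __deepcopy__(self, memo):
        # Allocate without calling __init__, then copy the graph
        # plus every whitelisted attribute onto the new instance.
        copied = GraphModule.__new__(ObservedGraphModule)
        memo[id(self)] = copied
        copied.graph = copy.deepcopy(self.graph, memo)
        for name in self._preserved_attrs:
            setattr(copied, name, copy.deepcopy(getattr(self, name), memo))
        return copied


if __name__ == "__main__":
    m = ObservedGraphModule(
        graph={"nodes": ["linear"]},
        qconfig_map={"linear": "default_qconfig"},
        scope_map={"linear": "root.linear"},
    )
    m2 = copy.deepcopy(m)
    # The copy keeps the quantization state, as an independent object.
    print(m2._qconfig_map, m2._qconfig_map is m._qconfig_map)
```

Without the override, `copy.deepcopy(m)` would fall through to the base `__deepcopy__` and return a plain `GraphModule` with no `_qconfig_map` at all, which is exactly the state loss this commit guards against.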