pytorch
916af892 - [quant][fx] Update name of packed weight attributes (#51259)

Committed 4 years ago
[quant][fx] Update name of packed weight attributes (#51259)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51259

Store the FQN of the module that uses the packed weights (the quantized op). In the case of fusion, update the scope mapping to store the module path of the fused node.

Test Plan:
python test/test_quantization.py test_packed_weight_fused_op

Imported from OSS

Reviewed By: jerryzh168
Differential Revision: D26117964
fbshipit-source-id: 9d929997baafb1c91063dd9786a451b0040ae461
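The idea in the commit message can be sketched as follows. This is a minimal illustration of the two pieces it describes: naming a packed-weight attribute after the fully qualified name (FQN) of the module that uses it, and, on fusion, carrying the original module path over to the fused node in the scope mapping. All names here (`fqn_to_attr_name`, `update_scope_for_fusion`, the example node names) are hypothetical and do not reflect the actual PyTorch internals touched by this commit.

```python
# Hypothetical sketch, not PyTorch's real implementation: packed weights
# are stored as attributes named after the FQN of the consuming module,
# and fused nodes inherit the module path of the node they were fused from.

def fqn_to_attr_name(fqn: str, index: int) -> str:
    # Turn a module FQN like "features.conv1" into a legal attribute name
    # for the packed weight, e.g. "features_conv1_packed_weight_0".
    sanitized = fqn.replace(".", "_") if fqn else "root"
    return f"{sanitized}_packed_weight_{index}"

def update_scope_for_fusion(scope: dict, fused_node: str, original_node: str) -> None:
    # After fusing (e.g. Conv + ReLU into one op), record the module path
    # of the original node for the fused node, so packed-weight naming
    # still reflects the source module.
    scope[fused_node] = scope[original_node]

# Example: a node-to-module-path mapping before and after fusion.
scope = {"conv1": "features.conv1", "relu1": "features.relu1"}
update_scope_for_fusion(scope, "conv1_relu1_fused", "conv1")
print(fqn_to_attr_name(scope["conv1_relu1_fused"], 0))
# → features_conv1_packed_weight_0
```

Keying the attribute name on the module FQN (rather than a bare counter) is what makes the packed weight traceable back to the quantized op that owns it, which is the behavior the test plan (`test_packed_weight_fused_op`) exercises for the fused case.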