pytorch
dbf43d62 - [quant][fx] Only do reference module swapping for floating point fused modules (#74231)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74231

Add a check to make sure the weighted modules we swap are actually float fused modules, since a reference fused module (e.g. the reference version of linear - relu) has the same fused type as the floating point linear - relu; only the linear submodule has a different type.

Test Plan: phabricator diff for now, can add a test case after we know exactly what the problem is

Reviewed By: andrewor14

Differential Revision: D34888290

fbshipit-source-id: a7f53368a7c17f7d1a82afaa50d14d569b4923df
(cherry picked from commit 458dac9fdf8b4f0d786bf9c815c2f2fe8df13bb4)
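The sketch below illustrates the kind of check the summary describes; it is not the actual diff from #74231. It assumes the fused module is `torch.ao.nn.intrinsic.LinearReLU` (a sequential-style wrapper whose weighted submodule sits at index 0) and relies on the fact that the float fused module and its reference counterpart share the same outer type while the inner linear's type differs. Module paths may vary across PyTorch versions.

```python
# Minimal, illustrative sketch (not the pytorch implementation):
# decide whether a fused module is still a *floating point* fused module
# and therefore eligible for reference-module swapping.
import torch.nn as nn
import torch.ao.nn.intrinsic as nni  # assumed path for fused modules


def is_float_fused_linear_relu(module: nn.Module) -> bool:
    # Both the float fused LinearReLU and its reference-quantized version
    # have the same outer fused type, so the outer isinstance check alone
    # is not enough. The inner weighted submodule tells them apart: it is
    # a plain nn.Linear only in the float case (the reference version uses
    # a reference-quantized Linear subclass).
    return isinstance(module, nni.LinearReLU) and type(module[0]) is nn.Linear
```

In the conversion pass, a guard like this would skip modules that have already been swapped to their reference form, so the swap runs only once per fused module.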