pytorch
65e5bd23 - [quant] Add _FusedModule type to capture all fused modules for quantization (#47484)

Committed 4 years ago.

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/47484
Test Plan: Imported from OSS
Reviewed By: z-a-f
Differential Revision: D24774703
fbshipit-source-id: f0efc5d77035b9854ec3e31a1d34f05d5680bc22