pytorch
008ab27b - [quant][pyper] Add embedding_bag weight quantize and dequantize ops (#41293)

Committed 4 years ago
[quant][pyper] Add embedding_bag weight quantize and dequantize ops (#41293)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/41293

Add new operators that do quantization and packing for the 8-bit and 4-bit embedding_bag operators. This is an initial change to help unblock testing. It will be followed by graph mode passes to enable quantization of the embedding_bag module.

Note to reviewers: Future PRs will replace this op with separate quantize and pack operators and add support for floating point scale and zero point.

Test Plan:
python test/test_quantization.py TestQuantizedEmbeddingBag

Imported from OSS

Reviewed By: vkuzo

Differential Revision: D22506700

fbshipit-source-id: 090cc85a8f56da417e4b7e45818ea987ae97ca8a
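A minimal usage sketch of the quantize-and-pack / dequantize round trip this commit describes. The op names `quantized::embedding_bag_byte_prepack` and `quantized::embedding_bag_byte_unpack` are the 8-bit variants exposed in later PyTorch releases and are an assumption here; the exact op names and signatures introduced by this PR may differ.

```python
import torch

# Hypothetical sketch: per-row 8-bit quantize-and-pack of an embedding_bag
# weight, then dequantize back to float. Op names assume the later
# quantized::embedding_bag_byte_prepack / _unpack ops, not necessarily the
# exact ops added in this PR.

# Float weight of shape (num_embeddings, embedding_dim).
weight = torch.randn(10, 16, dtype=torch.float32)

# Quantize each row to uint8 and pack it together with its per-row
# scale and zero point into a single uint8 tensor.
packed = torch.ops.quantized.embedding_bag_byte_prepack(weight)

# Dequantize the packed representation back to float32.
unpacked = torch.ops.quantized.embedding_bag_byte_unpack(packed)

# The round trip is lossy but should stay close to the original weight.
print(torch.allclose(weight, unpacked, atol=0.1))
```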