7ab6f56c - [quant][core] Add quantize/dequantize ops for decomposed quantized Tensor representation (#87093)

[quant][core] Add quantize/dequantize ops for decomposed quantized Tensor representation (#87093)

Summary:
Added quantize/dequantize implementations for the out-of-core (decomposed) quantized Tensor representation: instead of storing quantization parameters (e.g. scale/zero_point) in a separate quantized Tensor object, we store them as arguments of the operators.

```
quantize(float32_tensor, scale, zero_point, dtype) -> int8_tensor
dequantize(int8_tensor, scale, zero_point, dtype) -> float32_tensor
```

Test Plan:
python test/test_quantization.py TestQuantizedTensor.test_decomposed_quantize
python test/test_quantization.py TestQuantizedTensor.test_decomposed_dequantize

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87093
Approved by: https://github.com/dzdang, https://github.com/z-a-f
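The decomposed signatures above can be illustrated with a minimal NumPy sketch of standard affine quantization. This is an illustrative assumption of the math involved (round, shift by zero_point, clamp to the dtype range), not the actual PyTorch operator implementation from this commit; the function names mirror the signatures in the summary.

```python
import numpy as np

def quantize(float32_tensor, scale, zero_point, dtype=np.int8):
    # Affine quantization: q = clamp(round(x / scale) + zero_point, qmin, qmax).
    # qmin/qmax come from the target integer dtype's representable range.
    info = np.iinfo(dtype)
    q = np.round(float32_tensor / scale) + zero_point
    return np.clip(q, info.min, info.max).astype(dtype)

def dequantize(int8_tensor, scale, zero_point, dtype=np.float32):
    # Affine dequantization: x = (q - zero_point) * scale.
    # Widen to int32 first so the zero_point subtraction cannot overflow.
    return ((int8_tensor.astype(np.int32) - zero_point) * scale).astype(dtype)

x = np.array([0.0, 0.5, -0.5, 1.0], dtype=np.float32)
q = quantize(x, scale=0.01, zero_point=0)        # int8 values [0, 50, -50, 100]
xr = dequantize(q, scale=0.01, zero_point=0)     # recovers x (within rounding error)
```

Because scale and zero_point travel as plain operator arguments, the int8 tensor stays an ordinary tensor, which is what makes this representation convenient for graph-level transformations.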