940959eb - [quant][fix] Add quant_min/quant_max for default dynamic quantization observer (#89267)

Committed 2 years ago
[quant][fix] Add quant_min/quant_max for default dynamic quantization observer (#89267)

Summary: quant_min/quant_max are needed for choose qparams, but previously they were not configurable. In the reference quantization flow with decomposed Tensors, we make them explicit.

Test Plan: tested in a future PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89267
Approved by: https://github.com/vkuzo
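To illustrate why quant_min/quant_max must be explicit inputs to qparam selection, here is a minimal sketch of asymmetric per-tensor scale/zero-point computation from an observed range. This is a simplified illustration, not PyTorch's actual observer implementation; the function name `choose_qparams_sketch` and its exact clamping details are assumptions for this example.

```python
def choose_qparams_sketch(x_min, x_max, quant_min, quant_max):
    """Compute (scale, zero_point) for asymmetric quantization.

    quant_min/quant_max define the integer target range (e.g. 0..255
    for quint8), so they directly determine the scale -- which is why
    they must be configurable rather than hard-coded in the observer.
    """
    # Extend the range to include zero so that 0.0 is exactly
    # representable (important for zero padding).
    x_min = min(x_min, 0.0)
    x_max = max(x_max, 0.0)

    scale = (x_max - x_min) / (quant_max - quant_min)
    if scale == 0.0:
        # Degenerate all-zero tensor: any scale works.
        return 1.0, quant_min

    # Zero point maps real 0.0 onto the integer grid, clamped to range.
    zero_point = quant_min - round(x_min / scale)
    zero_point = max(quant_min, min(quant_max, zero_point))
    return scale, int(zero_point)
```

For example, an observed range of [0.0, 25.5] with a quint8 target range of [0, 255] yields a scale of 0.1 and a zero point of 0; narrowing quant_max would proportionally increase the scale, showing how the integer range shapes the quantization parameters.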