48e675ac - fx quant: fix subtle bug in BinaryOpQuantizeHandler logic in matching (#56294)

fx quant: fix subtle bug in BinaryOpQuantizeHandler logic in matching (#56294)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56294

When matching a pattern to `BinaryOpQuantizeHandler`, we need to check for dtype support on the base node of the pattern, not the current node. This matters in cases such as `add-relu` and `mul-relu`, where the current node is `relu` but the base node is `add` or `mul`.

Test Plan:
```
python test/test_quantization.py TestQuantizeFx
```
There is no good test case for this in the current logic. Created an add-relu model manually and verified with pdb that the add node was being used to match against dtypes.

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D27831070

fbshipit-source-id: 3697f1328dff9fec3eb910bae49a73793ef36d63
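The base-node vs. current-node distinction can be seen by tracing a small add-relu module like the one described in the test plan. The sketch below is illustrative only, not code from this PR: the `AddRelu` class and its names are made up for the example, and it uses only the public `torch.fx.symbolic_trace` API. The traced graph contains a two-node pattern whose last (current) node is `relu` but whose base node is the `add` call, and the commit makes the dtype-support check run against that base node.

```python
import torch
import torch.fx
import torch.nn as nn


class AddRelu(nn.Module):
    """Hypothetical toy module: add followed by relu, as in the test plan."""

    def forward(self, x, y):
        z = x + y              # base node of the matched pattern (operator.add)
        return torch.relu(z)   # current (last) node of the matched pattern (relu)


traced = torch.fx.symbolic_trace(AddRelu())
for node in traced.graph.nodes:
    # Prints placeholders for x and y, a call_function node for operator.add,
    # a call_function node for torch.relu, and the output node.
    print(node.op, node.target)
```

Per the commit message, it is this kind of two-node `add-relu` / `mul-relu` pattern, handled by `BinaryOpQuantizeHandler` in FX graph mode quantization, where checking the current node (`relu`) instead of the base node (`add` or `mul`) gave the wrong dtype-support answer.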