1478e5ec - [quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)

[quant] Remove nn.quantized.ReLU module and nn.quantized.functional.relu (#47415)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47415

nn.ReLU works for both float and quantized input, so we don't want to define an nn.quantized.ReLU that does the same thing as nn.ReLU; the same applies to nn.quantized.functional.relu. This also removes the numerical inconsistency for models that quantize nn.ReLU independently in QAT mode.

Test Plan: Imported from OSS

Reviewed By: z-a-f

Differential Revision: D24747035

fbshipit-source-id: b8fdf13e513a0d5f0c4c6c9835635bdf9fdc2769
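A minimal sketch of the rationale: the same nn.ReLU module handles both float and quantized tensors, so a separate quantized module is redundant. The tensor shape and quantization parameters below are illustrative, not taken from the PR:

```python
import torch
import torch.nn as nn

relu = nn.ReLU()

# Float input: the ordinary float path.
x = torch.randn(2, 3)
print(relu(x))

# Quantized input: the very same module works unchanged,
# which is why nn.quantized.ReLU was removed.
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0,
                               dtype=torch.quint8)
print(relu(qx))
```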