4ad8ebe7 - quant layer/group/instance norm: make weights and biases optional (#39203)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/39203

Adds logic and test coverage for optional weights and biases in the quantized normalization operators (layer/group/instance norm). This was broken before this PR because the `TORCH_LIBRARY` registration declared weight and bias as required parameters; this PR removes that requirement and cleans up the callsites.

Note: the registrations are consolidated in `native_functions.yaml` rather than `library.cpp`, following a discussion with ezyang.

Test Plan:
```
python test/test_quantization.py TestQuantizedOps.test_qlayer_norm
python test/test_quantization.py TestQuantizedOps.test_group_norm
python test/test_quantization.py TestQuantizedOps.test_instance_norm
python test/test_quantization.py TestStaticQuantizedModule.test_layer_norm
python test/test_quantization.py TestStaticQuantizedModule.test_group_norm
python test/test_quantization.py TestStaticQuantizedModule.test_instance_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_layer_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_group_norm
python test/test_quantization.py TestQuantizeScriptPTSQOps.test_instance_norm
```

Imported from OSS

Differential Revision: D21885259

fbshipit-source-id: 978c7b8bd6c11a03e9e5fdb68f154cb80cc43599
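For illustration, a minimal sketch of exercising the optional weight/bias path this PR enables. It is not taken from the commit; the op name `torch.ops.quantized.layer_norm` and its argument order (input, normalized_shape, weight, bias, eps, output_scale, output_zero_point) are assumptions inferred from the PR description, and the authoritative schema lives in `native_functions.yaml`.

```
import torch

# Quantize a small float tensor so we can feed the quantized op.
x = torch.randn(2, 8, 4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

# Before this PR, weight and bias were required parameters in the
# TORCH_LIBRARY registration, so passing None here would fail at the
# schema level. Afterward, None means "no affine transform".
# NOTE: argument order below is an assumption based on the PR summary.
qy = torch.ops.quantized.layer_norm(
    qx,
    [4],    # normalized_shape: normalize over the last dimension
    None,   # weight: now optional
    None,   # bias: now optional
    1e-5,   # eps
    0.1,    # output_scale
    0,      # output_zero_point
)
print(qy.dequantize().shape)  # torch.Size([2, 8, 4])
```

The `quantized::group_norm` and `quantized::instance_norm` ops gain the same optional weight/bias treatment per the summary, with their own leading arguments (e.g. `num_groups` for group norm).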