0909639c - fix dispatch declaration bug about quantized op (#83649)

# Motivation:
Fixes issue #83051. `_fake_quantize_learnable_per_tensor_affine_backward` and `_fake_quantize_learnable_per_channel_affine_backward` are implemented for CPU and CUDA, but they are currently registered under the CompositeImplicitAutograd category. Without this fix, registering a new backend requires providing an autograd function for these two operators. That does not make sense, because they are backward operators implemented directly with TensorIterators.

# Solution:
Add a `dispatch` keyword in aten/src/ATen/native/native_functions.yaml and explicitly dispatch the operators to CPU and CUDA, like this:
` dispatch:`
`   CPU, CUDA: _fake_quantize_learnable_per_tensor_affine_backward`

# Additional context:
No additional unit test is added because this change does not affect PyTorch's functionality; it only affects operator registration on other backends, such as XPU, so it is difficult to write a unit test for it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83649
Approved by: https://github.com/jerryzh168
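For reference, the resulting entries in native_functions.yaml would look roughly like the sketch below. Only the dispatch sections are the point here; the function schema lines are abbreviated with `(...)` rather than reproduced verbatim, so treat the surrounding fields as illustrative.

```yaml
# Sketch of the dispatch declarations added for the two backward operators
# (function schemas abbreviated; only the dispatch keys matter for this change).
- func: _fake_quantize_learnable_per_tensor_affine_backward(...)
  dispatch:
    CPU, CUDA: _fake_quantize_learnable_per_tensor_affine_backward

- func: _fake_quantize_learnable_per_channel_affine_backward(...)
  dispatch:
    CPU, CUDA: _fake_quantize_learnable_per_channel_affine_backward
```

With an explicit CPU/CUDA dispatch, these operators no longer fall under CompositeImplicitAutograd, so a new backend can register its own kernels for them without also supplying an autograd function.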