948cd61a - add fallthrough kernel for AutogradMeta key (#94603)

add fallthrough kernel for AutogradMeta key (#94603)

The other `Autograd[Backend]` keys all have fallthrough kernels registered to them, but `AutogradMeta` was missing its fallthrough kernel. This is a problem for custom ops that don't have autograd support, if you try to run them with meta tensors. If you have a custom op and register a CPU and a Meta kernel, then:

(1) if you run the op with CPU tensors, it dispatches straight to the CPU kernel (as expected);
(2) if you run the op with meta tensors, you get an error: because no fallthrough is registered to the AutogradMeta key, dispatch lands on AutogradMeta and fails, since the op author hasn't provided an autograd implementation.

Here's a repro that I confirmed now works:

```python
import torch
from torch._dispatch.python import enable_python_dispatcher

# Define a custom op "test::foo" with CPU and Meta kernels,
# but no autograd implementation.
lib = torch.library.Library("test", "DEF")
impl_cpu = torch.library.Library("test", "IMPL", "CPU")
impl_meta = torch.library.Library("test", "IMPL", "Meta")

def foo_impl(x):
    return x + 1

lib.define("foo(Tensor a) -> Tensor")
impl_meta.impl("foo", foo_impl)
impl_cpu.impl("foo", foo_impl)

with enable_python_dispatcher():
    a = torch.ones(2, device='meta')
    print("@@@@@")
    b = torch.ops.test.foo.default(a)
    print(b)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94603
Approved by: https://github.com/ezyang, https://github.com/albanD
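The dispatch behavior described above can be sketched with a toy dispatcher in plain Python. All names below are illustrative, not PyTorch's real internals (the real dispatcher is C++, and the actual fix registers a fallthrough kernel on the AutogradMeta key); the sketch only shows why a missing fallthrough on a higher-priority key causes an error even when a backend kernel exists:

```python
# Toy model of dispatch-key resolution with fallthrough kernels.
# All names here are illustrative, not PyTorch's real internals.

FALLTHROUGH = object()  # sentinel: "skip this key, keep looking"

def dispatch(key_order, kernels, *args):
    """Walk keys in priority order; a fallthrough defers to the next key."""
    for key in key_order:
        kernel = kernels.get(key)
        if kernel is None:
            # No kernel and no fallthrough: dispatch stops here with an error,
            # which is what happened at AutogradMeta before the fix.
            raise RuntimeError(f"no kernel registered for key {key}")
        if kernel is FALLTHROUGH:
            continue  # fall through to the next key in priority order
        return kernel(*args)
    raise RuntimeError("no kernel found for any key")

# A custom op with a Meta kernel but no autograd implementation.
kernels = {
    "AutogradMeta": FALLTHROUGH,  # the fix: without this entry, dispatch errors here
    "Meta": lambda x: x + 1,
}

# Autograd keys have higher priority than backend keys.
print(dispatch(["AutogradMeta", "Meta"], kernels, 41))  # -> 42

# Without the fallthrough, dispatch stops at AutogradMeta and raises.
broken = {"Meta": lambda x: x + 1}
try:
    dispatch(["AutogradMeta", "Meta"], broken, 41)
except RuntimeError as e:
    print("error:", e)
```

With the fallthrough in place, the AutogradMeta key is skipped and dispatch reaches the Meta kernel, mirroring what the repro above now exercises in PyTorch itself.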