pytorch
3b6b27e9 - Add a miniature backend select implementation for prims (#82311)

Add a miniature backend select implementation for prims (#82311)

It turns out that for factory function prims (prims with no Tensor arguments), we were always going to the ATen implementation of the operator. Prior to the next PR in this stack, the change is a bit hard to test, but you can indirectly observe the impact by running arange with trace dispatching on (well, you need https://github.com/pytorch/pytorch/pull/82277 patched in too):

```
$ TORCH_SHOW_DISPATCH_TRACE=1 python -c "import torch._refs; torch._refs.arange(4, device='meta')"
 [callBoxed] op=[prims::arange], key=[BackendSelect]
  [call] op=[aten::empty_strided], key=[BackendSelect]
   [redispatch] op=[aten::empty_strided], key=[Meta]
```

Previously, the prims::arange call was dispatching to Undefined.

For maximum fidelity, technically we're supposed to redispatch to a specific dispatch key, but the Python bindings to do this don't exist and it was easy to route to the implementations which we already intended to go to. We would have to fix this if we wanted external backends to register custom implementations to OTHER dispatch keys via Python op registration.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82311
Approved by: https://github.com/ngimel, https://github.com/bdhirsh
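
A minimal, hypothetical sketch of the mechanism described above: a Python-level BackendSelect kernel for a factory-style op (no Tensor arguments) that inspects the device argument and routes to the intended implementation. The namespace `my_ns`, the op `my_arange`, and all kernel functions are invented for illustration and are not the code added by this commit; like the commit, the BackendSelect kernel calls the target implementation directly instead of redispatching to a specific dispatch key.

```
# Hypothetical example; "my_ns" / "my_arange" are made-up names.
import torch
from torch.library import Library

lib = Library("my_ns", "DEF")
lib.define("my_arange(int n, *, Device? device=None) -> Tensor")

def my_arange_default(n, *, device=None):
    # Default path: defer to the existing ATen implementation.
    return torch.arange(n, device=device)

def my_arange_meta(n, *, device=None):
    # Meta path: produce a tensor of the right shape without real storage.
    return torch.empty(n, dtype=torch.int64, device="meta")

def my_arange_backend_select(n, *, device=None):
    # Factory ops have no Tensor arguments, so the backend cannot be inferred
    # from the inputs; pick an implementation from the device kwarg. As in the
    # commit message, route directly to the intended implementation rather
    # than redispatching to a specific dispatch key.
    if device is not None and torch.device(device).type == "meta":
        return my_arange_meta(n, device=device)
    return my_arange_default(n, device=device)

lib.impl("my_arange", my_arange_default, "CompositeExplicitAutograd")
lib.impl("my_arange", my_arange_meta, "Meta")
lib.impl("my_arange", my_arange_backend_select, "BackendSelect")

# torch.ops.my_ns.my_arange(4, device="meta") should now hit the
# BackendSelect kernel first (as in the dispatch trace above) and take
# the meta path.
```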