Remove supports_named_tensor from codegen entirely. (#38739)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38739
Instead of codegenning named tensor support checks into
CPUType/CUDAType, we add a new dispatch key that is put into a
tensor's dispatch key set whenever it has names. By default, the
fallback implementation for this key reports that named tensors are
not supported; for operators that do support them, we register a
fallthrough, which lets dispatch proceed to the true backend
implementation.
There are a bunch of small pieces which are necessary to make this
happen:
- NameMode now also excludes DispatchKey::Named from the dispatch set
- To avoid bad error messages, we add a teensy special case to
  the dispatcher for named_not_supported_kernel: if the boxed
  kernel we need to invoke from an unboxed call site is this
  kernel, and we don't support boxing, but the kernel is known
  not to need boxing, we just pass nullptr for the stack.
  The special case here is very nice: it doesn't affect the fast
  path and is only exercised when named tensors are not supported.
- I need to add support for per-operator fallthrough registration.
  This is done similarly to how we support the fallthrough fallback,
  by keeping track of whether the registered kernel for an operator
  is a fallthrough.
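The fallback/fallthrough interplay described above can be modeled with
a small self-contained sketch. This is a toy dispatcher, not the real
PyTorch one; all the names here (Key, Op, dispatch, etc.) are
illustrative, and only the mechanics mirror the description: a key
whose kernel is absent falls back to an error, while a per-operator
fallthrough flag skips the key entirely and lets dispatch reach the
backend kernel.

```cpp
#include <functional>
#include <map>
#include <stdexcept>
#include <string>

// Toy model of dispatch-key-based dispatch (illustrative only).
enum class Key { Named, CPU };

struct Op {
  // Per-key registered kernels for this operator.
  std::map<Key, std::function<std::string()>> kernels;
  // Per-operator record of whether a key's kernel is a fallthrough.
  std::map<Key, bool> fallthrough;
};

// Default behavior for the Named key: named tensors not supported.
std::string namedNotSupportedFallback() {
  throw std::runtime_error("op does not support named tensors");
}

std::string dispatch(const Op& op, bool hasNames) {
  // Highest-priority key first: Named (only if the tensor has
  // names), then the backend key (CPU).
  if (hasNames) {
    auto ft = op.fallthrough.find(Key::Named);
    bool fallsThrough = ft != op.fallthrough.end() && ft->second;
    if (!fallsThrough) {
      auto it = op.kernels.find(Key::Named);
      if (it != op.kernels.end()) {
        return it->second();  // op has a real Named kernel
      }
      return namedNotSupportedFallback();  // default: error out
    }
    // Registered fallthrough: skip Named, keep going to the backend.
  }
  return op.kernels.at(Key::CPU)();
}
```

An op that registers a fallthrough for the Named key dispatches
straight to its CPU kernel even when the tensor has names; an op that
registers nothing for Named hits the error fallback.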
It is possible we could go even further down this path, and move
the named tensor logic itself into this key. I leave this
up to future work.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D21662643
Pulled By: ezyang
fbshipit-source-id: 5bc6ae14a1f600189bd8bf865f74dd1700d932f7