Add DispatchKey impl overload; remove use of torch::dispatch (#35706)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35706
It is extremely common to define implementations of operators at a
specific dispatch key, so we add an overload to impl specifically for
this case. I then delete most uses of torch::dispatch.
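A minimal sketch of the before/after at a call site, using the
torch::Library-style registration surface for illustration (the op
name myop and the kernel myop_cpu are hypothetical):

    #include <torch/library.h>

    // Hypothetical CPU kernel for a custom operator.
    at::Tensor myop_cpu(const at::Tensor& self) {
      return self.clone();
    }

    TORCH_LIBRARY(myns, m) {
      m.def("myop(Tensor self) -> Tensor");
      // Before: pin the kernel to a key by wrapping it in torch::dispatch.
      //   m.impl("myop", torch::dispatch(c10::DispatchKey::CPU, myop_cpu));
      // After: the new impl overload takes the DispatchKey directly.
      m.impl("myop", c10::DispatchKey::CPU, myop_cpu);
    }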
dispatch_autograd call sites can't make use of this overload, so
instead the new preferred way to mark a registration as autograd is
to pass kAutograd as the dispatch key (a short form analogous to the
kCPU/kCUDA constants we support today).
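Continuing the sketch above (myop_autograd is a hypothetical autograd
wrapper, and this assumes kAutograd is exposed as a plain
c10::DispatchKey constant):

    // Hypothetical autograd wrapper: records the backward graph, then
    // redispatches to the backend kernel.
    at::Tensor myop_autograd(const at::Tensor& self);

    // Inside the same TORCH_LIBRARY(myns, m) block as above: autograd
    // is now just another dispatch key, so the same impl overload
    // replaces the dedicated dispatch_autograd spelling.
    m.impl("myop", c10::kAutograd, myop_autograd);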
I flip-flopped about whether kAutograd should have the type
DispatchKey or some other type (to help better encapsulate the
DispatchKey enum); this is more direct, and I can't think of any
BC problems from this usage.
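Concretely, the short form is nothing more than a constant of the
enum type, roughly (a sketch; exact header and key names may differ
at this revision):

    namespace c10 {
    // kAutograd is a plain DispatchKey, so any API that already
    // accepts a DispatchKey (such as the new impl overload) accepts
    // it as-is.
    constexpr DispatchKey kAutograd = DispatchKey::Autograd;
    } // namespace c10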
Some other reorganization I did:
- I renamed all of the worker functions in op_registration to have
  a leading underscore and made them private, to make it clearer
  what the public versus private API is (the private API shouldn't
  be used by users because it doesn't come with && overloads); see
  the sketch after this list.
- In a few places where I was already touching the lines, I replaced
  fully typed-out DispatchKey enum values with shorter kFoo names,
  similar to kAutograd, but I didn't publish these globally.
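A generic sketch of the public-versus-private split from the first
bullet (illustrative only, not the actual op_registration code):

    #include <c10/core/DispatchKey.h>
    #include <utility>

    class Options {
     public:
      // Public API: rvalue-qualified so it chains fluently on
      // temporaries; it forwards to the private worker.
      Options&& kernel(c10::DispatchKey key) && {
        return std::move(*this)._kernel(key);
      }

     private:
      // Private worker (leading underscore): carries the real logic
      // but has no && overloads, so users should go through the
      // public API instead.
      Options&& _kernel(c10::DispatchKey key) {
        dispatch_key_ = key;
        return std::move(*this);
      }

      c10::DispatchKey dispatch_key_ = c10::DispatchKey::Undefined;
    };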
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D20775783
Pulled By: ezyang
fbshipit-source-id: e45b289e5d1f86c180b24cf14c63cf4459ab5337