Make resize_ use normal device dispatch (#42240)
Summary:
`resize_` only requires manual registration to the `Autograd` key; its device kernels can safely live alongside our normal device dispatch in `native_functions.yaml`.
But currently we manually register its `CPU/CUDA` kernels (and leave no dispatch entry in `native_functions.yaml`), which makes `resize_` non-overrideable from a backend's point of view. Even though the op should indeed dispatch at the device level, this forced XLA to whitelist `resize_` and register a lowering to the XLA key. This PR moves the device dispatch of `resize_` back to `native_functions.yaml` so that it properly shows up as an `abstract` method for downstream extensions, as sketched below.
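With the dispatch entries back in `native_functions.yaml`, an out-of-tree backend can register its own kernel through the regular dispatcher API instead of whitelisting the op. The snippet below is a minimal sketch, assuming the `TORCH_LIBRARY_IMPL` extension API; the kernel name and body are hypothetical (not XLA's actual lowering), and the exact `resize_` schema (e.g. the optional `memory_format` argument) may differ between releases.

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// Hypothetical backend kernel for resize_; the body is a placeholder,
// not XLA's actual lowering.
at::Tensor& my_backend_resize_(
    at::Tensor& self,
    at::IntArrayRef size,
    c10::optional<at::MemoryFormat> memory_format) {
  // ... backend-specific storage (re)allocation goes here ...
  return self;
}

// Register the kernel under the extension's dispatch key; the dispatcher
// now routes resize_ here for tensors on that backend.
TORCH_LIBRARY_IMPL(aten, XLA, m) {
  m.impl("resize_", my_backend_resize_);
}
```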
Note that we also do manual registration for `copy_`/`detach_`/`resize_as_`/etc. in aten, but those are slightly different from `resize_`: for them we only register `catchAll` kernels instead of device kernels (see the sketch below). I'll need to investigate and send a follow-up PR for those ops.
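For illustration, here is the difference between the two registration styles in a made-up `myops` namespace (not aten's actual registrations): a catch-all kernel is a single function shared by every backend, while device kernels are registered per dispatch key.

```cpp
#include <ATen/ATen.h>
#include <torch/library.h>

// One kernel used for every backend (the catch-all style, like copy_/detach_ today).
at::Tensor catchall_kernel(const at::Tensor& self) { return self.clone(); }

// Separate kernels per backend (the device-dispatch style this PR adopts for resize_).
at::Tensor cpu_kernel(const at::Tensor& self) { return self.clone(); }
at::Tensor cuda_kernel(const at::Tensor& self) { return self.clone(); }

TORCH_LIBRARY(myops, m) {
  // Defining the schema together with a kernel registers it as a catch-all.
  m.def("catchall_op(Tensor self) -> Tensor", catchall_kernel);
  // This op only declares a schema; per-device kernels are registered below,
  // so it shows up as "abstract" to backends that have not provided one.
  m.def("device_op(Tensor self) -> Tensor");
}

TORCH_LIBRARY_IMPL(myops, CPU, m) {
  m.impl("device_op", cpu_kernel);
}
TORCH_LIBRARY_IMPL(myops, CUDA, m) {
  m.impl("device_op", cuda_kernel);
}
```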
Pull Request resolved: https://github.com/pytorch/pytorch/pull/42240
Reviewed By: VitalyFedyunin
Differential Revision: D22846311
Pulled By: ailzhang
fbshipit-source-id: 10b6cf99c4ed3d62fc4e1571f4a2a463d1b88c81