87a1ebc9 - fix RegistrationDeclarations.yaml, now that we codegen composite kernels for structured functional/inplace ops (#56307)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56307

This should fix https://github.com/pytorch/pytorch/issues/56273. I tested these changes locally by applying them directly on top of https://github.com/pytorch/pytorch/pull/56151 and running the XLA tests (`xla/test/cpp/build/test_ptxla`).

**Current state:** For ops that have been ported to structured, if an external backend like XLA has implemented the `out` op but not the `functional` variant, calls to the functional op land in our code-generated `CompositeExplicitAutograd` kernel, which calls the structured operator's `meta()` function and then redispatches to the external backend's `out` kernel. If XLA has registered its own kernel for the `functional` variant of the op, that kernel overrides our codegen'd composite kernel.

XLA has logic to code-generate "CPU fallback" kernels for "required" ops, and it determines which ops are required from `RegistrationDeclarations.yaml`. That information was technically incorrect up until this PR: we were code-generating `inplace`/`functional` composite kernels for structured ops without updating `RegistrationDeclarations.yaml` to reflect them.

Test Plan: Imported from OSS

Reviewed By: bhosmer

Differential Revision: D27883950

Pulled By: bdhirsh

fbshipit-source-id: fe896b0d2bbd4369490dcdf7a87f227fd3d8b8b3
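To make the dispatch flow described above concrete, here is a minimal C++ sketch of the pattern, using `add` as a stand-in for a structured op. The function name `add_functional_sketch` is invented for illustration; this is not the actual generated kernel, which is emitted by the codegen into the composite registration files.

```cpp
#include <ATen/ATen.h>
#include <ATen/ExpandUtils.h>  // for at::infer_size

// Illustrative sketch of a codegen'd CompositeExplicitAutograd kernel for
// the functional variant of a structured op (here: add). The name is a
// placeholder, not the real generated symbol.
at::Tensor add_functional_sketch(const at::Tensor& self, const at::Tensor& other) {
  // "meta" step: compute the output's shape without touching device memory.
  // Simplified: the real structured meta() also handles dtype promotion and
  // is shared with the out= and inplace variants.
  at::Tensor out = at::empty(at::infer_size(self.sizes(), other.sizes()),
                             self.options());
  // Redispatch to the out= variant: an external backend like XLA only needs
  // to implement add.out, and this call lands in its kernel and fills `out`.
  at::add_out(out, self, other);
  return out;
}
```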
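The `RegistrationDeclarations.yaml` fix itself amounts to reporting those codegen'd composite kernels in the per-op metadata. Below is a hedged sketch of what an entry looks like after the fix, assuming the `schema`/`dispatch`/`default` fields that external-backend codegen reads; the exact schema line is illustrative.

```yaml
# Illustrative RegistrationDeclarations.yaml entry (abridged). After this PR,
# a structured op's functional variant reports that a default (composite)
# kernel exists, so XLA's codegen need not treat it as a "required" op.
- schema: 'aten::add.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor'
  dispatch: 'True'
  default: 'True'
```

With `default: 'True'`, XLA's codegen can see that a fallback kernel exists for the functional variant and skip generating a CPU fallback stub for it.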