pytorch
62fad315 - fix per-dispatchkey-mode caching bug (#98030)

fix per-dispatchkey-mode caching bug (#98030)

The bug: when we move a mode to the autograd key, we need to use the "functionality" key for it (AutogradFunctionality). But when we do that, we also need to clear the PythonDispatcher caches for every op for **every** backend-specific autograd key, since autograd ops could run with both CPU and CUDA tensors underneath the mode, and entries for both may have been cached.

I didn't add a test, since this ends up being indirectly tested by export in this PR. If someone would prefer a direct test, I can add one.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/98030
Approved by: https://github.com/ezyang
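The sketch below is a minimal, self-contained illustration of the failure mode described above; it does not use PyTorch's real dispatcher internals. The `DispatchKey` enum, the `_dispatch_cache` dict, and the helper functions are all hypothetical stand-ins that show why invalidating only one autograd key leaves stale entries when tensors from more than one backend run under the mode.

```python
# Hypothetical model of a per-(op, dispatch key) cache -- NOT PyTorch internals.
from enum import Enum, auto


class DispatchKey(Enum):
    CPU = auto()
    CUDA = auto()
    AutogradCPU = auto()
    AutogradCUDA = auto()


# Hypothetical cache: (op name, dispatch key) -> resolved kernel name.
_dispatch_cache: dict = {}

# Backend-specific autograd keys that all map to the AutogradFunctionality
# "functionality" key in this toy model.
AUTOGRAD_BACKEND_KEYS = (DispatchKey.AutogradCPU, DispatchKey.AutogradCUDA)


def cache_kernel(op, key, kernel):
    """Record a resolved kernel for (op, key), as a dispatcher cache would."""
    _dispatch_cache[(op, key)] = kernel


def move_mode_to_autograd(op):
    """Clear cached entries for *every* autograd backend key for this op.

    The buggy behavior would clear only one backend's autograd key; the fix
    is to drop entries for all of them, because the mode may sit above both
    CPU and CUDA tensors and each backend may already have a cached entry.
    """
    for key in AUTOGRAD_BACKEND_KEYS:
        _dispatch_cache.pop((op, key), None)


if __name__ == "__main__":
    # Both backends cached an autograd entry for the same op.
    cache_kernel("aten::add", DispatchKey.AutogradCPU, "autograd_cpu_kernel")
    cache_kernel("aten::add", DispatchKey.AutogradCUDA, "autograd_cuda_kernel")

    # Moving the mode must invalidate both, or one backend keeps a stale hit.
    move_mode_to_autograd("aten::add")
    assert not any(op == "aten::add" for (op, _) in _dispatch_cache)
```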