9a56997f - [dtensor][5/N] add cached propagator for TP (#90734)

[dtensor][5/N] add cached propagator for TP (#90734)

This PR adds a cached propagator for tensor parallel (TP) use. It caches the sharding propagation decision made for an operator given a particular input sharding, so repeated calls with the same input sharding reuse the cached decision. This can improve eager-mode performance.

Differential Revision: [D42876249](https://our.internmc.facebook.com/intern/diff/D42876249)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90734
Approved by: https://github.com/XilunWu, https://github.com/fduwjj
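The caching strategy the commit describes is straightforward memoization: key a cache on the operator and its input sharding, and on a hit reuse the previously computed decision instead of re-running sharding propagation. Below is a minimal sketch of that idea, assuming hypothetical names (`ShardingPropagator`, `CachedShardingPropagator`, `propagate`) that stand in for the actual dtensor classes rather than reproducing PyTorch's implementation:

```python
# Minimal sketch of a cached sharding propagator; names are illustrative,
# not the real PyTorch dtensor API.


class ShardingPropagator:
    """Decides the output sharding for an op given its input shardings."""

    def propagate(self, op_name: str, input_shardings: tuple) -> str:
        # Stand-in for the (potentially expensive) propagation-rule lookup.
        return f"decision({op_name}, {input_shardings})"


class CachedShardingPropagator(ShardingPropagator):
    """Memoizes decisions keyed by (operator, input shardings).

    In eager mode the same op is often called repeatedly with identically
    sharded inputs, so later calls become a dictionary lookup.
    """

    def __init__(self) -> None:
        self._cache: dict[tuple, str] = {}

    def propagate(self, op_name: str, input_shardings: tuple) -> str:
        key = (op_name, input_shardings)
        if key not in self._cache:
            self._cache[key] = super().propagate(op_name, input_shardings)
        return self._cache[key]


prop = CachedShardingPropagator()
# First call computes and caches; the second call is a cache hit.
print(prop.propagate("aten.mm", ("Shard(0)", "Replicate()")))
print(prop.propagate("aten.mm", ("Shard(0)", "Replicate()")))
```

The cache key must capture everything the decision depends on (here, the operator and the input shardings); anything that varies per call but affects the output sharding would have to be part of the key as well.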