pytorch
864ab936 - aot_autograd: avoid using intermediate_base logic unnecessarily (#97786)

aot_autograd: avoid using intermediate_base logic unnecessarily (#97786)

fixes https://github.com/pytorch/pytorch/issues/97691, see the issue for the proposed design.

Now that we are employing AOTAutograd's "intermediate base" logic a lot less frequently, we might see some speedups in the benchmark suite.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97786
Approved by: https://github.com/jansel, https://github.com/soulitzer
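For context, a minimal sketch of the kind of pattern the "intermediate base" handling is meant for: a compiled function whose outputs are multiple views of a tensor created inside the function. The function names here are illustrative, not taken from the PR or the test suite.

```python
import torch

# Illustrative only: both outputs are views of "base", a tensor created
# inside the function (an "intermediate base"). Aliased outputs like this
# are the case AOTAutograd's intermediate-base logic exists to handle.
def multiple_views_of_intermediate(x):
    base = x.mul(2)          # intermediate tensor created in the function
    return base[0], base[1]  # two views sharing the same intermediate base

# A single non-aliasing output should not need that extra handling.
def plain_output(x):
    return x.mul(2)

compiled = torch.compile(multiple_views_of_intermediate)
out_a, out_b = compiled(torch.randn(2, 3, requires_grad=True))
```

The change described above narrows when that logic kicks in, so cases like `plain_output` (and other situations detailed in the linked issue) avoid the overhead.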