02f6d14b - Only allow SymInt across partitioner boundaries, and fixes (#96653)

This PR does a few things all at once, as I needed to fix several bugs on the way here.

The main goal of the PR is to fix the `'float' object has no attribute '_has_symbolic_sizes_strides'` error. The general idea is to heavily penalize cuts at non-SymInt (but still SymNode) values in the graph. This doesn't work for the default partitioner, so, essentially, dynamic shapes with the default partitioner are not supported.

While doing this, I had to fix a few other bugs in the partitioner:

* SymNode operations weren't considered recomputable. But they are very cheap; go wild.
* zeros_like wasn't considered recomputable, and this prevented some gradient formulas (e.g., for angle with real inputs) from finding a cut at all.
* AOTAutograd tests use the default partitioner. I switch them to use the min-cut partitioner...
* ...but this reveals a bug: if there are backward outputs that don't depend on tangents, they never get assigned to the backward graph. I fix this by making the backward outputs mandatory members of the backward graph. I have to be careful to filter out None backward outputs; those never participate in flow analysis!

This causes some wobbling in the min-cut tests, but the changes seem legitimate: since we're now willing to recompute, the partitioner can reduce the number of SymInts it transmits by doing some recompute in the backward pass.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96653
Approved by: https://github.com/ngimel
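
To illustrate the core idea, here is a minimal sketch (not the actual partitioner code from this PR) of how a min-cut formulation could treat the different value kinds at the forward/backward boundary. The helper name `symnode_cut_weight` is hypothetical; the sketch assumes traced values are recorded in `node.meta["val"]` as a tensor, `SymInt`, `SymFloat`, `SymBool`, or plain Python scalar.

```python
# Rough sketch only -- not the implementation from this commit.
import math
import torch
import torch.fx as fx

def symnode_cut_weight(node: fx.Node) -> float:
    """Cost of saving this node's value across the forward/backward cut."""
    val = node.meta.get("val", None)
    if isinstance(val, torch.SymInt):
        return 1.0       # SymInts are allowed across the partitioner boundary
    if isinstance(val, (torch.SymFloat, torch.SymBool)):
        return math.inf  # heavily penalize non-SymInt SymNode values
    if isinstance(val, torch.Tensor):
        return 1.0       # tensors fall back to the usual size-based cost
    return math.inf      # plain Python scalars: recompute rather than save
```

In a min-cut setup these weights would act as edge capacities in the flow graph: an effectively infinite capacity on non-SymInt SymNode values means the cut never chooses to save them, which roughly mirrors the commit's goal of only allowing SymInt across partitioner boundaries and recomputing everything else in the backward graph.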