b898bdd4 - [JIT] Don't re-run CSE on every block (#41479)

[JIT] Don't re-run CSE on every block (#41479)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/41479

Previously we were re-running CSE every time we recursed into a new block, which in turn created a new AliasDb for the whole graph. This was O(# Nodes * # Blocks). For graphs which don't have any autodiff opportunities, such as Densenet, create_autodiff_subgraphs is now linear in the number of nodes. For Densenet this pass was measured at ~0.1 seconds.

This pass is still non-linear for models which actually do create autodiff subgraphs, because in the

```
bool any_changed = true;
while (any_changed) {
  AliasDb aliasDb(graph_);
  any_changed = false;
  for (auto it = workblock.end()->reverseIterator();
       it != workblock.begin()->reverseIterator();) {
    bool changed;
    std::tie(it, changed) = scanNode(*it, aliasDb);
    any_changed |= changed;
  }
}
```

loop we recreate the AliasDb (which is O(N)) every time we merge something and scanNode returns. I will make that linear in the next PR in the stack.

Test Plan: Imported from OSS

Reviewed By: Krovatkin

Differential Revision: D22600606

Pulled By: eellison

fbshipit-source-id: b08abfde2df474f168104c5b477352362e0b7b16
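To make the complexity argument above concrete, here is a minimal, self-contained C++ sketch of the before/after recursion pattern. `Block`, `Node`, `runCSE`, `visitBefore`, and `visitAfter` are hypothetical stand-ins invented for illustration, not the actual torch::jit types or the code changed by this commit.

```
// Minimal sketch, assuming toy Block/Node types; runCSE stands in for
// running CSE plus building a whole-graph AliasDb, whose cost is
// proportional to the total number of nodes in the graph.
#include <cstdio>
#include <vector>

struct Block;
struct Node {
  std::vector<Block*> blocks;  // sub-blocks, e.g. the bodies of an if node
};
struct Block {
  std::vector<Node*> nodes;
};

void runCSE(Block* /*graphRoot*/) {
  std::puts("CSE + AliasDb over the whole graph: O(#nodes)");
}

// Before the fix: CSE re-ran on every recursion into a sub-block, so the
// total cost was O(#nodes * #blocks).
void visitBefore(Block* graphRoot, Block* block) {
  runCSE(graphRoot);  // whole-graph work repeated for every block
  for (Node* n : block->nodes)
    for (Block* sub : n->blocks)
      visitBefore(graphRoot, sub);
}

// After the fix: CSE runs once up front and the recursion only walks nodes,
// so graphs with no autodiff opportunities are processed in O(#nodes).
void visitAfter(Block* block) {
  for (Node* n : block->nodes)
    for (Block* sub : n->blocks)
      visitAfter(sub);
}

int main() {
  // Tiny graph: one node carrying one sub-block.
  Block graph;
  Block subBlock;
  Node ifNode;
  ifNode.blocks.push_back(&subBlock);
  graph.nodes.push_back(&ifNode);

  visitBefore(&graph, &graph);  // prints twice: once per block

  runCSE(&graph);               // once, up front
  visitAfter(&graph);           // no further whole-graph work
}
```

The remaining non-linearity the message points at has the same shape: the quoted while loop rebuilds the O(N) AliasDb after every successful merge, so a run that performs k merges costs O(N * k); per the commit message, the next PR in the stack makes that linear.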
Author: Elias Ellison