fcbc34a5 - [PyTorch][Static Runtime] Avoid recomputing input size in dict_unpack (#71252)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71252

Same old problem, same old solution. Interestingly, I tried using c10::irange instead, but that caused really bad assembly to be generated -- we lost inlining for much of the loop body!

ghstack-source-id: 146939573

Test Plan: CI. Spot-checked the assembly before and after, and confirmed that the loop termination value was recomputed before the change and not after.

Reviewed By: mikeiovine
Differential Revision: D33558118
fbshipit-source-id: 9fda2f1f89bacba2e8b5e61ba432871e973201fe