pytorch
075024b9 - [Static Runtime] Fix a bug that assigns multiple outputs to single storage (#63012)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/63012

This change fixes a bug where the static runtime's memory optimizer assigns multiple outputs of a node to the same storage. Fixing this bug enables the static runtime to run `inline_cvr` with its memory optimizer enabled.

A problematic line from `inline_cvr` was the following:
```
%7767 : Tensor, %getitem_6419.1 : Tensor = fb::gather_ranges(%tensor74.1, %7764)
```
Enabling the memory optimizer assigned `%7767` and `%getitem_6419.1` to the same storage, which corrupted their data during the second iteration.

This change fixes the bug by marking all inputs & outputs of a node as `alive` during the liveness analysis, so that no inputs or outputs of a node collide with each other in storage. This is a fair assumption that most ops' implementations already rely on, but it was missing from the analysis before this change.

Test Plan:
- Added a unittest `StaticRuntime.ValuesShareSameStorageDoesNotContainOutputsFromSameNode` to cover the new code.

Reviewed By: hlu1

Differential Revision: D30202018

fbshipit-source-id: 10287a1bee9e86be16a5201e9a7cd7c7f046bab9
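To see why use-only liveness can alias two outputs of the same node, here is a minimal, hypothetical Python sketch (not the actual static runtime code; `live_ranges`, the toy program, and all names are invented for illustration). If a value's live range is computed from its uses alone, two outputs defined by one node can end up with disjoint ranges whenever their uses don't overlap, so a range-based storage planner may legally hand them the same slot. Including the defining node in each output's range, as this commit does, forces the ranges of sibling outputs to overlap at the definition point.

```python
def live_ranges(nodes, include_defs=True):
    """Compute per-value live ranges for a toy straight-line program.

    nodes: list of (inputs, outputs) tuples, in program order.
    Returns {value: (start, end)} by instruction index.
    include_defs=False models the pre-fix behavior in this sketch:
    ranges are derived from uses only, so an output looks dead at
    its own defining node.
    """
    rng = {}

    def touch(v, t):
        s, e = rng.get(v, (t, t))
        rng[v] = (min(s, t), max(e, t))

    for t, (ins, outs) in enumerate(nodes):
        for v in ins:
            touch(v, t)
        if include_defs:
            for v in outs:  # the fix: outputs are alive at their node
                touch(v, t)
    return rng


def overlap(a, b):
    # Two closed intervals overlap iff each starts before the other ends.
    return a[0] <= b[1] and b[0] <= a[1]


# Toy program mirroring the inline_cvr pattern: one node, two outputs.
#   t0: (a, b) = gather_ranges(x)
#   t1: c = use(b)
#   t2: d = use(a)
prog = [(["x"], ["a", "b"]),
        (["b"], ["c"]),
        (["a"], ["d"])]

buggy = live_ranges(prog, include_defs=False)  # a:(2,2), b:(1,1) -> disjoint
fixed = live_ranges(prog, include_defs=True)   # a:(0,2), b:(0,1) -> overlap
```

With `include_defs=False`, `a` and `b` have disjoint ranges and a planner could place them in one storage slot, which is exactly the corruption described above; with the fix, their ranges meet at the defining node and can never be coalesced.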