[SR] Make fused_sigrid_transforms work on graph outputs (#71507)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71507
We previously disabled `FuseListUnpack` when the fused op's outputs would alias the graph outputs. The concern was that some ops assumed `p_node->Output(0).isTensor()` implies `p_node->Output(i).isTensor()` for all `i > 0`. That assumption breaks when the output list contains both managed and unmanaged tensors.
Instead of keeping this special case and missing out on some fusions, we should implement the fused ops correctly: check each output's type individually rather than inferring it from the first output (see the sketch below).
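A minimal sketch of that per-output handling, not the code in this diff: only `p_node->Output(i)` and `isTensor()` come from the summary above; the header path, the helper name `write_fused_outputs`, and the `computed` results vector are illustrative assumptions.

```cpp
#include <torch/csrc/jit/runtime/static/impl.h>  // assumed location of ProcessedNode
#include <vector>

// Hypothetical helper: writes each fused result into its output slot,
// checking every slot's type on its own instead of assuming that
// Output(0) being a tensor means Output(i) is a tensor for all i > 0.
void write_fused_outputs(
    torch::jit::ProcessedNode* p_node,
    const std::vector<c10::IValue>& computed) {
  for (size_t i = 0; i < computed.size(); ++i) {
    c10::IValue& out = p_node->Output(i);
    if (out.isTensor() && computed[i].isTensor()) {
      // Managed output: the memory planner pre-allocated this tensor,
      // so copy the result into it rather than replacing the IValue.
      out.toTensor().copy_(computed[i].toTensor());
    } else {
      // Unmanaged output: this slot may not hold a tensor yet, so
      // assign the computed value directly.
      out = computed[i];
    }
  }
}
```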
Reviewed By: d1jang
Differential Revision: D33669034
fbshipit-source-id: 8b291b5fe610ffbe47b88a5a018daa63cb5665b0
(cherry picked from commit c6cba235a69da92b97b1a02d5c33065bb09eb0a9)