pytorch
b2a55076 - Fix deadlock in some edge case in autograd (#73961)

Summary:
Minimal example that deadlocks before but not after:

```python
import torch
from torch.autograd import Function

class Foo(Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, gO):
        return gO.clone()

def get_out():
    inp = torch.rand(2, requires_grad=True)

    # The python function is first so that it runs
    # last in the backward pass
    right = Foo.apply(inp)

    # An op that creates new memory
    left1 = inp.clone()
    # An op that saves its input
    left2 = left1 ** 2

    # Inplace modify so that the backward for
    # left2 always raises an error
    left1 += 1

    # An op that takes both sides as input.
    # After running, each side's last op will be in
    # the ready queue,
    # and the op for the left side will run first, as it was
    # executed last during the forward
    out = left2 + right

    return out

# Nothing should be a global variable here since, from what
# I can see, Python leaks all the global objects
get_out().sum().backward()
```

Since reproducing this requires the Python interpreter to die, it is hard to test in CI. Let me know if you have an idea how to do it though.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/73961

Reviewed By: malfet

Differential Revision: D34752747

Pulled By: albanD

fbshipit-source-id: 1a537b1f733e161e8d3ff053cd432b37b34d432a
(cherry picked from commit 17943e4c04c782d81deab439e010195f04e75bbd)
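For contrast with the deadlocking repro above, this is a minimal sketch of a well-behaved custom autograd `Function` of the same shape: it saves no state that can be invalidated in-place, so its backward runs without error. The `Double` class name and the specific operation are illustrative assumptions, not part of the fix:

```python
import torch
from torch.autograd import Function

class Double(Function):
    # A custom op whose backward cannot fail: it does not
    # depend on any saved tensor that could be modified in-place.
    @staticmethod
    def forward(ctx, x):
        return x * 2

    @staticmethod
    def backward(ctx, gO):
        # d(2x)/dx = 2, so just scale the incoming gradient
        return gO * 2

x = torch.rand(3, requires_grad=True)
Double.apply(x).sum().backward()
# Each element of the gradient is exactly 2.0
print(x.grad)
```

Here the gradient of `sum(2 * x)` with respect to `x` is a tensor of twos, regardless of the random input values.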