pytorch
086ce765 - Add new parameter `materialize_grads` to torch.autograd.grad() (#97015)

Add new parameter `materialize_grads` to torch.autograd.grad() (#97015)

Fixes #44189

Adds a new parameter, `materialize_grads`, to the `torch.autograd.grad()` function. This parameter allows the gradient for an unused input to be returned as a zero tensor instead of `None`, which is helpful for higher-order partial derivatives. Here is an example of using the new parameter to compute d^3y/dx^3 given y = a * x:

```python
import torch

x = torch.tensor(0.5, dtype=torch.float32, requires_grad=True)
a = torch.tensor(1.0, dtype=torch.float32, requires_grad=True)
y = x * a
dydx = torch.autograd.grad(y, x, create_graph=True, allow_unused=True)
d2ydx2 = torch.autograd.grad(dydx, x, allow_unused=True, materialize_grads=True)
try:
    d3ydx3 = torch.autograd.grad(d2ydx2, x, allow_unused=True, materialize_grads=True)
except RuntimeError:
    assert False, "Should not raise error"
```

With `materialize_grads`, d2ydx2 is 0 instead of None, enabling d3ydx3 to be calculated as defined mathematically without throwing an error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97015
Approved by: https://github.com/soulitzer
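For comparison, on PyTorch versions that predate this parameter, the same effect can be had by post-processing the tuple returned by `torch.autograd.grad()` and replacing `None` entries with zero tensors. A minimal sketch, assuming only the long-standing `allow_unused=True` behavior; the helper name `materialize` is illustrative, not part of the PyTorch API:

```python
import torch

def materialize(grads, inputs):
    # Replace None gradients (returned for unused inputs when
    # allow_unused=True) with zero tensors shaped like the inputs.
    return tuple(torch.zeros_like(inp) if g is None else g
                 for g, inp in zip(grads, inputs))

x = torch.tensor(0.5, requires_grad=True)
a = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(2.0, requires_grad=True)  # b is unused below

y = x * a  # y depends on x and a, but not on b

# Without materialization, the gradient w.r.t. b comes back as None.
grads = torch.autograd.grad(y, (x, b), allow_unused=True)
gx, gb = materialize(grads, (x, b))
# gx == dy/dx == a == 1.0; gb is a zero tensor instead of None
```

Passing `materialize_grads=True` folds this replacement into the `grad()` call itself, so downstream code never has to special-case `None`.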