pytorch
f4707ae0 - Add arguments to collect_results (#89611)

Add arguments to collect_results (#89611)

Fixes https://github.com/pytorch/torchdynamo/issues/1901.

Test script:

```python
import copy
import torch
import torch._dynamo as dynamo
import torch._dynamo.config

dynamo.config.repro_after = "dynamo"
dynamo.config.repro_level = 4

def custom_backend(gm: torch.fx.GraphModule, example_inputs):
    gm = copy.deepcopy(gm)
    for node in gm.graph.nodes:
        if len(node.args) > 1:
            node.target = torch.add
            node.args = (node.args[0], 0)
    gm.recompile()
    return gm

inp = torch.ones(5)
inp.requires_grad_(True)

@dynamo.optimize(custom_backend)
def foo(x):
    x = x * x
    return x.sum()

y = foo(inp)
print(y)
y.backward()
print(inp.grad)
```

Before this change, the script finishes but prints an incorrect gradient. After the change, the accuracy minifier is triggered.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89611
Approved by: https://github.com/ezyang
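For context on why the repro produces a wrong gradient: the deliberately buggy `custom_backend` above rewrites the multiply node into `torch.add(node.args[0], 0)`, so the compiled graph computes `sum(x)` instead of `sum(x * x)`. A minimal sketch, using pure Python with a finite-difference `grad_fd` helper (a hypothetical stand-in for autograd, not part of the commit), shows how far apart the two gradients are:

```python
def grad_fd(f, x, eps=1e-6):
    """Central finite-difference gradient of a scalar function f at point x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

correct = lambda x: sum(v * v for v in x)  # what foo is supposed to compute
broken = lambda x: sum(v + 0 for v in x)   # what the miscompiled graph computes

x = [1.0] * 5
print(grad_fd(correct, x))  # approx [2.0] * 5, matching autograd's 2*x at x = ones(5)
print(grad_fd(broken, x))   # approx [1.0] * 5, the incorrect gradient the bug produced
```

With the fix, `collect_results` receives enough information for the accuracy minifier to detect this mismatch instead of silently returning the wrong gradient.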