d3ab9c6f - Fix: Dynamo log always emits ANSI color codes into torch_compile_debug/torchdynamo/debug.log due to colored=True in lazy_format_graph_code (#167823)

Commit
91 days ago
Fix: Dynamo log always emits ANSI color codes into torch_compile_debug/torchdynamo/debug.log due to colored=True in lazy_format_graph_code (#167823)

Summary: Added ANSI escape sequence handling and a custom logging formatter. Please refer to https://github.com/pytorch/pytorch/issues/167812 for a detailed background explanation. This PR adds a formatter for the log_file_handler in the dynamo logger that filters out ANSI codes.

Before this change, log in debug.log:

```
def forward(self, L_x_: "i64[][]cpu"):
    l_x_ = L_x_

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:11 in forward, code: a = torch.ones(2, x.item())
    item: "Sym(s20 + 5)" = l_x_.item();  l_x_ = None
    a: "f32[2, s20 + 5][Max(1, s20 + 5), 1]cpu" = torch.ones(2, item)

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:12 in forward, code: b = torch.ones(3, y.item() + 5)
    b: "f32[3, s20 + 5][Max(1, s20 + 5), 1]cpu" = torch.ones(3, item);  item = None

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:13 in forward, code: res = torch.cat([a, b], dim=0)
    res: "f32[5, s20 + 5][Max(1, s20 + 5), 1]cpu" = torch.cat([a, b], dim = 0);  a = b = None

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:14 in forward, code: return res.sum()
    sum_1: "f32[][]cpu" = res.sum();  res = None
    return (sum_1,)
```

After this change, log in debug.log:

```
def forward(self, L_x_: "i64[][]cpu"):
    l_x_ = L_x_

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:11 in forward, code: a = torch.ones(2, x.item())
    item: "Sym(s20 + 5)" = l_x_.item();  l_x_ = None
    a: "f32[2, s20 + 5][Max(1, s20 + 5), 1]cpu" = torch.ones(2, item)

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:12 in forward, code: b = torch.ones(3, y.item() + 5)
    b: "f32[3, s20 + 5][Max(1, s20 + 5), 1]cpu" = torch.ones(3, item);  item = None

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:13 in forward, code: res = torch.cat([a, b], dim=0)
    res: "f32[5, s20 + 5][Max(1, s20 + 5), 1]cpu" = torch.cat([a, b], dim = 0);  a = b = None

    # File: /Users/bytedance/Downloads/Repo/pytorch/mydebug1.py:14 in forward, code: return res.sum()
    sum_1: "f32[][]cpu" = res.sum();  res = None
    return (sum_1,)
```

X-link: https://github.com/pytorch/pytorch/pull/167823
Approved by: https://github.com/angelayi
Reviewed By: yangw-dev
Differential Revision: D87297668
fbshipit-source-id: 1d2b6d08cc2a068954107515b564bf7f1c9f6e42
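The commit message describes the fix only at a high level. As a rough illustration of the general technique (a custom logging formatter that strips ANSI escape sequences before records reach a plain-text file handler), here is a minimal sketch; the class name, regex, and handler wiring below are hypothetical and are not taken from the PR itself:

```python
import logging
import re

# Matches common ANSI color escape sequences such as "\x1b[36m" and "\x1b[0m"
# so they can be removed before a record is written to a plain-text log file.
ANSI_ESCAPE_RE = re.compile(r"\x1b\[[0-9;]*m")


class StripAnsiFormatter(logging.Formatter):
    """Formatter that removes ANSI color codes from the formatted message.

    Hypothetical name; it only sketches the idea of filtering ANSI codes on
    the file handler while console handlers keep their colored output.
    """

    def format(self, record: logging.LogRecord) -> str:
        formatted = super().format(record)
        return ANSI_ESCAPE_RE.sub("", formatted)


# Usage sketch: attach the stripping formatter to the file handler only,
# so colored output can still reach the terminal unchanged.
logger = logging.getLogger("example.dynamo")
file_handler = logging.FileHandler("debug.log")
file_handler.setFormatter(StripAnsiFormatter("%(levelname)s %(message)s"))
logger.addHandler(file_handler)

# Written to debug.log as "WARNING colored text", without escape codes.
logger.warning("\x1b[36mcolored\x1b[0m text")
```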
Author
generatedunixname499836121