pytorch
e4e761b2 - record caller frame instead of function frame (#96882)

record caller frame instead of function frame (#96882)

Previously, when starting to trace a function, we would record a frame summary recording the definition location. This would lead to an unconventional-looking stack trace when used for debugging, e.g., shape guards.

```
  File ".../scripts/avik/pt2/example.py", line 407, in forward
    def forward(self, x):
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 912, in forward
    @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length"))
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 562, in forward
    def forward(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 484, in forward
    def forward(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 416, in forward
    def forward(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 275, in forward
    def forward(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 351, in forward
    attention_scores = attention_scores + attention_mask
```

As noted in https://github.com/pytorch/pytorch/pull/95848#discussion_r1134397096, we would like to change this to record function calls instead, like conventional stack traces do. This diff makes that change. The above stack now looks like the following, which is much more helpful at a glance for understanding what's going on.

```
  File ".../scripts/avik/pt2/example.py", line 408, in forward
    bert_out = self.bert(**x)
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 1021, in forward
    encoder_outputs = self.encoder(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 610, in forward
    layer_outputs = layer_module(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 496, in forward
    self_attention_outputs = self.attention(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 426, in forward
    self_outputs = self.self(
  ...
  File ".../transformers/models/bert/modeling_bert.py", line 351, in forward
    attention_scores = attention_scores + attention_mask
```

Differential Revision: [D44101882](https://our.internmc.facebook.com/intern/diff/D44101882/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/96882
Approved by: https://github.com/ezyang
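To make the mechanics concrete, here is a minimal standalone sketch (not PyTorch's actual implementation; `definition_summary`, `caller_summary`, and `traced_function` are hypothetical names) contrasting a frame summary built from a function's definition location with one built from its caller's frame:

```python
# Hypothetical illustration of the two frame-summary choices; Dynamo's real
# code differs, but the traceback primitives used here are standard library.
import inspect
import traceback

def definition_summary(fn):
    # Old behavior (sketch): summarize where `fn` is *defined*, so the
    # recorded entry points at the `def ...` line.
    code = fn.__code__
    return traceback.FrameSummary(code.co_filename, code.co_firstlineno, code.co_name)

def caller_summary():
    # New behavior (sketch): summarize the *caller's* frame, so the recorded
    # entry points at the call site instead.
    info = inspect.stack()[2]  # [0] = this helper, [1] = traced fn, [2] = caller
    return traceback.FrameSummary(info.filename, info.lineno, info.function)

def traced_function(x):
    # Print both summaries to contrast where each one points.
    for label, fs in [("definition frame", definition_summary(traced_function)),
                      ("caller frame    ", caller_summary())]:
        print(f"{label}: {fs.filename}:{fs.lineno} in {fs.name} -> {fs.line}")
    return x + 1

traced_function(41)  # the "caller frame" summary points at this line
```

Run as a script, the first summary points at the `def traced_function(x):` line while the second points at the `traced_function(41)` call site, mirroring the before/after stacks above.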