4cf23c6a - FunctionalTensor: avoid spurious not_implemented logging during proxy tracing (#111040)

This is kind of hard to test, but I can try to add a test case if requested.

I noticed locally that we now end up logging to the ProxyTensorMode and FakeTensorMode `not_implemented` logs in very simple compile examples: https://github.com/pytorch/pytorch/blob/main/torch/fx/experimental/proxy_tensor.py#L269

It was because `_mirror_autograd_meta_to()` indirectly queries sizes, and since modes have higher priority than subclasses, `aten::sym_sizes()` was getting dispatched to our modes before going to `FunctionalTensor.__torch_dispatch__`. This works out fine (they return NotImplemented and we eventually get to `FunctionalTensor`), but I figured we want to avoid cluttering up the logs. So I wrapped the calls with `FunctionalTensorMode`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111040
Approved by: https://github.com/ezyang
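A minimal, self-contained sketch of the dispatch ordering the message describes — not the actual PyTorch internals. `NoisyMode` and `Wrapper` are hypothetical stand-ins for ProxyTorchDispatchMode/FakeTensorMode and `FunctionalTensor`: because an active mode outranks a tensor subclass, the mode sees every op first, declines ops on subclasses it doesn't recognize by returning `NotImplemented` (the source of the spurious log lines), and only then does the subclass's `__torch_dispatch__` run.

```python
import torch
from torch.utils._python_dispatch import TorchDispatchMode

class NoisyMode(TorchDispatchMode):
    # Hypothetical stand-in for ProxyTorchDispatchMode / FakeTensorMode:
    # it declines ops involving subclasses it doesn't recognize, emitting
    # the kind of "not implemented" chatter the commit removes.
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        foreign = [t for t in types if t is not torch.Tensor]
        if foreign:
            print(f"mode declining {func} for {foreign}")  # the spurious log line
            return NotImplemented  # dispatch then falls through to the subclass
        return func(*args, **(kwargs or {}))

class Wrapper(torch.Tensor):
    # Hypothetical stand-in for FunctionalTensor: a wrapper subclass whose
    # __torch_dispatch__ eventually handles the op itself.
    @staticmethod
    def __new__(cls, elem):
        return torch.Tensor._make_wrapper_subclass(cls, elem.shape, dtype=elem.dtype)

    def __init__(self, elem):
        self.elem = elem

    @classmethod
    def __torch_dispatch__(cls, func, types, args=(), kwargs=None):
        unwrapped = [a.elem if isinstance(a, Wrapper) else a for a in args]
        return func(*unwrapped, **(kwargs or {}))

t = Wrapper(torch.randn(3))
with NoisyMode():
    # The active mode sees the call before Wrapper does, declines it with
    # NotImplemented (printing on the way out), and only then does
    # Wrapper.__torch_dispatch__ handle it. In the commit's case the op was
    # a metadata query (aten::sym_sizes()) triggered indirectly by
    # _mirror_autograd_meta_to(), not an explicit add.
    out = torch.add(t, t)
```

My reading of why the fix works: entering `FunctionalTensorMode` around the metadata-mirroring calls puts it at the top of the mode stack, so the size queries dispatch straight to it instead of first bouncing off the proxy/fake modes and their `not_implemented` logs.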