Exclude wrapper tensors from functorch in the native::resize_output fastpath (#61846)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61846
Related to #61485.
native::resize_output has a fast path that avoids dispatching.
Unfortunately, a number of CompositeImplicitAutograd operators
(e.g. torch.linalg.norm) directly call out= variants of other operators
and therefore end up calling native::resize_output. That fast path,
combined with functorch's use of a mode-dispatch key to wrap tensors,
causes silently incorrect behavior in functorch (more details are
available in #61485).
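For context, the fast path looks roughly like the following. This is a
simplified sketch, not the actual source: `resize_output_sketch` and the
inlined size check stand in for the real helpers in
aten/src/ATen/native/Resize.cpp.
```cpp
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>

// Simplified sketch of the pre-fix behavior of native::resize_output.
// When a resize is needed, it calls at::native::resize_ directly, so
// the resize never goes through the dispatcher and any wrapper-aware
// handling (such as functorch's) is skipped.
const at::Tensor& resize_output_sketch(
    const at::Tensor& output,
    at::IntArrayRef shape) {
  if (!output.sizes().equals(shape)) {
    // Fast path: no dispatch.
    at::native::resize_(output, shape, c10::nullopt);
  }
  return output;
}
```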
The easy short-term fix is to have `native::resize_output` skip the
fast path and always dispatch whenever the Tensor is a functorch
wrapped Tensor. Longer-term fixes are proposed in the issue.
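Concretely, the guard looks something like this. A minimal sketch,
assuming a predicate over the tensor's DispatchKeySet; the helper name
`is_functorch_wrapped_tensor` and the exact keys checked below are
illustrative rather than the actual diff.
```cpp
#include <ATen/ATen.h>
#include <ATen/NativeFunctions.h>
#include <c10/core/DispatchKeySet.h>

// Illustrative: functorch wrapper tensors carry extra dispatch keys
// that plain CPU/CUDA tensors do not. The key set checked in the real
// diff may differ.
static const c10::DispatchKeySet kFunctorchWrappedTensorKeys(
    {c10::DispatchKey::FuncTorchBatched,
     c10::DispatchKey::FuncTorchGradWrapper});

static bool is_functorch_wrapped_tensor(const at::Tensor& t) {
  return !(t.key_set() & kFunctorchWrappedTensorKeys).empty();
}

const at::Tensor& resize_output_sketch(
    const at::Tensor& output,
    at::IntArrayRef shape) {
  if (!output.sizes().equals(shape)) {
    if (is_functorch_wrapped_tensor(output)) {
      // Wrapped tensor: go through the dispatcher so the wrapper's
      // resize logic actually runs.
      output.resize_(shape);
    } else {
      // Plain tensor: keep the no-dispatch fast path.
      at::native::resize_(output, shape, c10::nullopt);
    }
  }
  return output;
}
```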
Test Plan:
- Checked that this change fixes torch.linalg.norm and other operators
with this problem under functorch.
- functorch is not yet tested in pytorch/pytorch CI, but it likely will
be in the near future.
- Wait for PyTorch CI.
Reviewed By: ezyang
Differential Revision: D29764293
Pulled By: zou3519
fbshipit-source-id: c7afcb0bd3bc77d2ba716d5b11f62830d8bdf0a9