pytorch
2dc1726a - Compile NestedTensor with AOTAutograd (#110529)

Compile NestedTensor with AOTAutograd (#110529)

This PR contains a number of changes that improve subclass support for AOTAutograd/Inductor in general:

- Previously, if a subclass introduced extra aliasing between graph outputs and inputs, the partitioner would complain because grad_outputs are the graph outputs reused as-is. We now apply a `view_as(self)` to work around this.
- Use dense -> dense metadata when working with fwd_output_strides during the backward pass. This is important because the stride information comes from Inductor, which sees the dense-to-dense graph.
- Inductor requires that the inputs to the compiled backward match the expected strides computed during compilation. We now make the inner tensors of the subclass contiguous (previously, only the subclass itself was made contiguous).

Changes specific to NestedTensor that are relevant to compilation:

- Properly handle the case where `__tensor_unflatten__` is passed non-symbolic dense tensors with metadata extracted from fake subclasses.
- Skip the var_to_range logic for singleton ints.
- Skip the size-hint logic in Inductor for singleton ints.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110529
Approved by: https://github.com/bdhirsh
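As a rough illustration of the `view_as(self)` trick mentioned above: when a graph output is literally the same tensor object as an input, taking a fresh view produces a distinct tensor object that still shares the input's storage, which is enough to break the object-level aliasing. This is a minimal standalone sketch, not the actual AOTAutograd code path:

```python
import torch

def dedup_aliased_output(inp: torch.Tensor) -> torch.Tensor:
    # Returning `inp` directly would make the graph output the exact
    # same tensor object as the graph input. A view_as(self) call
    # creates a new tensor object backed by the same storage.
    out = inp.view_as(inp)
    return out

x = torch.ones(3)
y = dedup_aliased_output(x)
# y is a different Python/autograd object than x...
assert y is not x
# ...but still aliases the same underlying storage.
assert y.data_ptr() == x.data_ptr()
```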
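To illustrate why making only the outer subclass contiguous is insufficient: a tensor subclass like NestedTensor is backed by inner dense tensors (e.g. a values buffer and an offsets buffer), and it is those inner tensors whose strides Inductor sees. The sketch below uses hypothetical stand-in buffers rather than a real NestedTensor, and the helper name is illustrative:

```python
import torch

def make_inner_tensors_contiguous(values: torch.Tensor,
                                  offsets: torch.Tensor):
    # Stand-ins for the inner dense tensors backing a subclass.
    # Calling .contiguous() on each inner tensor guarantees the strides
    # Inductor expects, regardless of how the outer subclass looks.
    return values.contiguous(), offsets.contiguous()

values = torch.arange(12).reshape(3, 4).t()  # transposed: non-contiguous
offsets = torch.tensor([0, 2, 3])

v, o = make_inner_tensors_contiguous(values, offsets)
assert not values.is_contiguous()
assert v.is_contiguous() and o.is_contiguous()
```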