Store SymInt out of line (#84390)
swolchok reported that in non-tracing usage of Tensor we are wasting a lot
of time on is_symbolic() tests, e.g., when destructing SymInts. This
is a regression for no good reason because we don't actually ever
have SymInts in those cases. This PR moves the stored SymInts on
Tensor out of line, into a separate ExtraMeta struct, which is only
allocated when we make a Tensor store symbolic sizes/strides.
To avoid adding another word to TensorImpl, I take over the named tensor
metadata field. This makes named tensors require a double indirection
and use up more space, but that's OK since we're going to delete this
feature soon anyway.
I restore regular int64_t storage on Tensor. This entailed reverting
https://github.com/pytorch/pytorch/pull/82467; there are no other
substantive changes to SizesAndStrides, so a close review is not
necessary.
I don't bother optimizing sizes and strides in ExtraMeta in the same
way the stock tensor is optimized. I add a SymDimVector alias. I make
the SymInt UNCHECKED constructor public, as it is a useful optimization
in some situations where the int is known to be positive.
I thought about storing the SymInts on the Python object instead.
However, because we can allocate symbolic shape tensors directly
from C++, we cannot guarantee that there is a PyInterpreter for
a Tensor. So we do it this way instead; it's also faster since you
don't have to take out the GIL to do accesses.
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84390
Approved by: https://github.com/swolchok, https://github.com/Krovatkin