[PyTorch] Make NestedTensor::dim() work (#73679)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73679
We can update the TensorImpl state used to track dim() just fine. I'm not sure this is sustainable, though: do we *want* callers to be able to muck with nested_size_tensor_ directly?
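As a rough illustration of the idea only (not PyTorch's actual implementation): a nested tensor's dim() can be derived from a size matrix like nested_size_tensor_, where each row holds the sizes of one component tensor. The struct name and layout below are assumptions for the sketch.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a stand-in for the nested-size state on the TensorImpl.
// Each inner vector holds the sizes of one nested component tensor; all
// components are assumed to have the same number of dimensions.
struct NestedSizeSketch {
  std::vector<std::vector<int64_t>> nested_sizes;

  // dim() = 1 (the outer "list of tensors" dimension) plus the number of
  // columns in the size matrix (the dimensionality of each component).
  int64_t dim() const {
    return nested_sizes.empty()
        ? 1
        : 1 + static_cast<int64_t>(nested_sizes.front().size());
  }
};
```

Under this layout, a nested tensor whose components are 2-D (e.g. sizes {2,3} and {4,5}) reports dim() == 3, so updating the stored size matrix is enough to keep dim() consistent.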
ghstack-source-id: 150349610
Test Plan: Updated test_nestedtensor.
Reviewed By: cpuhrsch
Differential Revision: D34570523
fbshipit-source-id: 739555d63226f925d6a502c9c742ce5f431cb6cc
(cherry picked from commit 1bb188162f3639f26a6204ad5d40f73e4c664a6d)