[functorch] (Grad)TensorWrapper sometimes has storage (and data_ptr) (pytorch/functorch#65)
TensorWrapper can have storage now:
- Case 1: If it wraps a tensor with storage, it will have storage
- Case 2: If it wraps a tensor without storage, it will not have storage
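A minimal sketch of Case 1 as seen from Python, assuming the wrapper's data_ptr is reachable through functorch's `grad` transform (inside `grad`, tensors are (Grad)TensorWrappers); Case 2 would need a wrapped tensor without storage (e.g. a BatchedTensor), which is harder to reproduce directly:

```python
import torch
from functorch import grad

def f(x):
    # Inside grad, x is a (Grad)TensorWrapper around a dense tensor.
    # Case 1: the wrapped tensor has storage, so the wrapper now has
    # storage (and a data_ptr) as well.
    assert x.data_ptr() != 0
    return x.sum()

grad(f)(torch.randn(3))
```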
The rationale for this is to fix pytorch/functorch#7. When torch.tensor gets called, the
following happens:
- `at::empty` gets called
- some data from a `PyObject*` gets written directly into the new empty tensor
The previous problem was that `at::empty` returned a TensorWrapper
wrapping a regular Tensor, and that TensorWrapper had no
storage/data_ptr, so the direct write into the new tensor's memory
could not happen.
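A minimal repro sketch of the originally failing pattern (assuming the functorch `grad` entry point); with this change, the direct write into the freshly allocated wrapper succeeds:

```python
import torch
from functorch import grad

def f(x):
    # torch.tensor allocates via at::empty (which returns a TensorWrapper
    # under the transform) and then writes the Python scalar straight into
    # the result's data_ptr; this used to fail (pytorch/functorch#7).
    c = torch.tensor(2.0)
    return (x * c).sum()

print(grad(f)(torch.randn(3)))  # tensor([2., 2., 2.])
```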
It should be fine that TensorWrapper sometimes has storage. Users should
not write directly to the .data_ptr, because that would cause gradients
to be incorrect; the same caveat applies in regular PyTorch (see the
sketch below).
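A small illustration of that caveat in plain PyTorch, with a `.data` mutation standing in for a raw data_ptr write; mutating memory behind autograd's back silently corrupts gradients:

```python
import torch

x = torch.tensor([3.0], requires_grad=True)
y = x ** 2         # backward recomputes 2 * x from x's current memory
x.data[0] = 100.0  # bypasses autograd, like a raw data_ptr write
y.backward()
print(x.grad)      # tensor([200.]) instead of the correct tensor([6.])
```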
Test Plan:
- wait for tests