Eager mode: implement resize_ operation (#12004)
Add support for the PyTorch `resize_` operation. The PyTorch API method is documented
here:
https://pytorch.org/docs/stable/generated/torch.Tensor.resize_.html
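For context, here is a minimal Python example of the `resize_` semantics being targeted, shown on plain CPU tensors; the eager ORT backend aims to match this observable behavior:

```python
import torch

x = torch.tensor([1., 2., 3., 4., 5., 6.])
x.resize_(2, 2)   # shrink: keeps the first 4 elements, viewed as 2x2
print(x)          # tensor([[1., 2.], [3., 4.]])

x.resize_(2, 4)   # grow: existing elements are preserved, new memory is uninitialized
print(x.shape)    # torch.Size([2, 4])
```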
Implementation notes:
Some implementation details may deviate from expectations:
- Because an onnxruntime::Tensor does not support resizing in place, this
  functionality is implemented on the TensorImpl by swapping out the
  backing tensor whenever the size changes.
- In the ORT model the shape of the TensorImpl is defined by the
  backing onnxruntime::Tensor, so a TensorImpl cannot have a different
  shape / size than the tensor backing it. As a result, when resizing to a
  smaller size, where other implementations might keep the same backing
  storage, ORT re-allocates a new onnxruntime::Tensor and copies over as
  many of the existing elements as fit. Functionally you end up with the
  same output, but the underlying buffer is re-allocated (see the sketch
  after these notes).
A future change could allow the ORTTensorImpl to have a different
size / shape than the onnxruntime::Tensor backing it, which would let us
avoid the re-allocation.
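A small sketch of the difference described above, using CPU tensors for illustration: on CPU a shrinking `resize_` reuses the original storage (same `data_ptr()`), whereas with this change the ORT backend re-allocates the backing onnxruntime::Tensor, while the element values still match:

```python
import torch

x = torch.arange(6.0)              # [0., 1., 2., 3., 4., 5.]
ptr_before = x.data_ptr()

x.resize_(2, 2)                    # keep the first 4 elements as a 2x2 tensor
print(x)                           # tensor([[0., 1.], [2., 3.]])

# CPU: the storage is reused, so the pointer is unchanged.
# ORT (with this change): the backing onnxruntime::Tensor is re-allocated,
# so the pointer would differ, but the values above are the same.
print(x.data_ptr() == ptr_before)  # True on CPU
```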
The canonical CPU / CUDA implementations in the PyTorch repository:
CPU: aten/src/ATen/native/Resize.cpp
CUDA: aten/src/ATen/native/cuda/Resize.cpp