Batched grad for advanced indexing (index) (#47223)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/47223
This PR enables batched gradient computation for advanced indexing.
Previously, the backward formula wrote parts of the grad tensor in-place
into zeros_like(self). When computing batched gradients via vmap, grad is a
BatchedTensor while self is not, so this in-place write is not possible.
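As a rough illustration, the old pattern looked like the Python sketch below
(the real formula lives in derivatives.yaml and is written in C++; the
function name here is illustrative):

```python
import torch

def index_backward_old(self, indices, grad):
    # Allocate the gradient buffer shaped like `self` -- a plain,
    # unbatched tensor even when `grad` carries a vmap batch dim.
    grad_self = torch.zeros_like(self)
    # In-place scatter of `grad` into the buffer. If `grad` is a
    # BatchedTensor, its extra batch dim has nowhere to go here.
    grad_self.index_put_(indices, grad, accumulate=True)
    return grad_self
```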
To solve the problem, we instead create a new tensor with
`grad.new_zeros` and then write to that in-place. This new tensor will
have the same batchedness as the `grad` tensor.
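A hedged sketch of the new pattern, with the same caveats as above (the
actual change is in the C++ derivative formula):

```python
import torch

def index_backward_new(self, indices, grad):
    # new_zeros called on `grad` inherits grad's batchedness: under vmap
    # it returns a BatchedTensor, so the in-place write below sees
    # matching batch dims. Only self's metadata is read, never its values.
    grad_self = grad.new_zeros(self.shape, dtype=self.dtype, device=self.device)
    grad_self.index_put_(indices, grad, accumulate=True)
    return grad_self
```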
To prevent a performance regression (the autograd codegen special-cases
zeros_like so that the `self` tensor does not need to be saved for
backward), we teach the autograd codegen how to save `self.options()`
instead.
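To see why saving `self.options()` suffices: the backward pass only needs
self's sizes, dtype, and device, never its values. A Python analogue (a
hypothetical helper; the generated C++ passes `self.sizes()` and
`self.options()`):

```python
import torch

def make_grad_buffer(grad, self_sizes, self_dtype, self_device):
    # Mirrors grad.new_zeros(self.sizes(), self.options()) in the C++
    # formula: only saved metadata is consumed, so autograd never has to
    # keep the full `self` tensor alive for backward.
    return grad.new_zeros(self_sizes, dtype=self_dtype, device=self_device)
```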
Test Plan:
- new tests for batched gradients through advanced indexing (a usage sketch follows below)
- run the existing indexing tests
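For reference, the kind of computation this enables looks roughly like the
sketch below, written against the present-day `torch.vmap` API (the PR
predates the stable vmap interface, so exact spellings differed):

```python
import torch

x = torch.randn(5, requires_grad=True)
idx = torch.tensor([0, 2, 4])
y = x[idx]  # advanced indexing

def vjp(v):
    # Under vmap, v -- and hence the grad seen by the index backward
    # formula -- is a BatchedTensor.
    return torch.autograd.grad(y, x, v, retain_graph=True)[0]

# All three vector-Jacobian products in one batched call; previously the
# index backward formula failed on BatchedTensor grads.
jac = torch.vmap(vjp)(torch.eye(3))
```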
Reviewed By: ejguan
Differential Revision: D24741684
Pulled By: zou3519
fbshipit-source-id: e267999dc079f4fe58c3f0bdf5c263f1879dca92