update schema to reflect aliasing behavior (#39794)
Summary:
Fixes: https://github.com/pytorch/pytorch/issues/38555
I did an audit of `native_functions.yaml` and found several functions, in addition to `reshape`, whose schemas did not declare that they may alias their input:
```
@torch.jit.script
def foo(t: torch.Tensor):
    new_value = torch.tensor(1, dtype=t.dtype, device=t.device)
    t.flatten()[0] = new_value
    t.reshape(-1)[1] = new_value
    t.view_as(t)[2] = new_value
    t.expand_as(t)[3] = new_value
    t.reshape_as(t)[4] = new_value
    t.contiguous()[5] = new_value
    t.detach()[6] = new_value
    return t
```
Currently, none of these assignments survive dead code elimination; after this PR all of them do, and the scripted output matches eager.
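For reference, eager mode already treats these ops as aliasing. A minimal sketch of the behavior the JIT should preserve (assuming a contiguous input, since `flatten`/`reshape` only return views when no copy is needed):

```python
import torch

t = torch.zeros(8)
# On a contiguous tensor these ops return views of t's storage,
# so in-place writes through them are visible in t itself.
t.flatten()[0] = 1.0
t.reshape(-1)[1] = 2.0
t.view_as(t)[2] = 3.0
t.detach()[3] = 4.0
print(t[:4])  # tensor([1., 2., 3., 4.])
```

An alias-unaware optimizer would see these writes as dead and eliminate them, which is exactly the bug the schema update fixes.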
I don't think this needs a dedicated unit test; presumably the generic alias-analysis machinery is already tested, and this just brings these ops under the same umbrella.
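The fix itself is a schema change: each op's return value is annotated with the same alias set as `self`. A hypothetical sketch of what such `native_functions.yaml` entries look like with `Tensor(a)` alias annotations (the exact signatures in the real file may differ):

```
- func: flatten(Tensor(a) self, int start_dim=0, int end_dim=-1) -> Tensor(a)
- func: detach(Tensor(a) self) -> Tensor(a)
```

The `(a)` marker tells alias analysis (and autograd) that the output may share storage with the input, so mutations through the result cannot be optimized away.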
**BC-breaking note**: This updates the native operator schemas and the aliasing rules for autograd. JIT passes will no longer incorrectly optimize away mutations in graphs containing these ops, and in-place ops on the result of `flatten` will now be correctly tracked by Autograd, producing the correct backward graph.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39794
Differential Revision: D22008358
Pulled By: robieta
fbshipit-source-id: 9d3ff536e58543211e08254a75c6110f2a3b4992