Implement name inference for torch.bmm (#25123)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25123
The approach is different for CPU and CUDA. In particular:
- on CPU, I added a name inference rule to bmm_out
- on CUDA, bmm calls THCTensor_(baddbmm), so I added a name inference
rule to that.
When one calls baddbmm on CPU or CUDA, it errors out with NYI due to
named_guard: True on it in native_functions.yaml. I'm not planning on
implementing baddbmm soon because it's a little tricky to add on CPU,
and bmm is a more commonly used function.
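The rule itself is matmul-style name propagation over 3-D inputs: the batch names must unify, and the output takes the row name of the first operand and the column name of the second. A minimal pure-Python sketch of that logic (not the actual ATen implementation; the helper names here are hypothetical, and None stands for an unnamed dimension):

```python
def unify(a, b):
    # None (unnamed) matches anything; otherwise names must be equal.
    if a is None:
        return b
    if b is None or a == b:
        return a
    raise RuntimeError(f"dimension names {a!r} and {b!r} do not match")

def bmm_outnames(self_names, other_names):
    """Infer output names for bmm(self, other).

    self: (batch, n, m), other: (batch, m, p) -> out: (batch, n, p)
    """
    batch = unify(self_names[0], other_names[0])
    return (batch, self_names[1], other_names[2])
```

For example, bmm_outnames(('N', 'A', 'B'), ('N', 'B', 'C')) yields ('N', 'A', 'C'), and an unnamed batch dim unifies with a named one.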
Test Plan
- new tests [namedtensor ci]
Test Plan: Imported from OSS
Differential Revision: D16998073
Pulled By: zou3519
fbshipit-source-id: 8dc01898964318717911f28eebd6cdfffc7dfcf2