Make named tensor implementations more robust (#26968)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26968
To make implementations of an operator more robust, we should have a
separate "named area" where name propagation happens and an "unnamed
area" where the implementation is. Right now, many functions are
implemented without an "unnamed area". The problem is that if
someone modifies the implementation, it is very easy to break
named tensor support by calling a helper function that does not
propagate names correctly, and the existing test coverage for named
tensors is insufficient to catch such breakages.
This PR modifies some named tensor implementations to have separate
"named" and "unnamed" areas. The following implementations were
changed:
- dropout, softmax, log_softmax, bernoulli
- dot, mm, addmm, addmv, mv
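For illustration, here is a minimal Python sketch of the pattern (hypothetical, not the actual ATen code; `NamedVector`, `_softmax_unnamed`, and `softmax` are invented names): name propagation lives in one thin "named area" wrapper, while the "unnamed area" does the math and may call any helpers without risking name-propagation bugs.

```python
import math

class NamedVector:
    # Minimal stand-in for a 1-D named tensor: raw values plus a dim name.
    def __init__(self, values, name):
        self.values = values
        self.name = name

def _softmax_unnamed(values):
    # Unnamed area: pure computation, knows nothing about dimension names.
    # Refactoring this function cannot break name propagation.
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

def softmax(t):
    # Named area: delegate to the unnamed implementation, then propagate
    # the input's dimension name to the output in exactly one place.
    return NamedVector(_softmax_unnamed(t.values), t.name)

out = softmax(NamedVector([1.0, 2.0, 3.0], "C"))
```

Because the wrapper is the only place that touches names, a change to the numeric code path cannot silently drop them.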
Test Plan: - [namedtensor ci]
Differential Revision: D17627920
Pulled By: zou3519
fbshipit-source-id: 9300ac3962219b1fcd8c4c8705a2cea6f8c9d23d