[functorch] fix unsqueeze_ batching rule (#82899)
The old batching rule assumed that the tensor's batch dimension (bdim) was at dimension 0, but this is not always the case, so the in-place unsqueeze could land on the wrong physical dimension.
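A minimal sketch of the underlying index arithmetic (plain Python, not the actual functorch C++ rule; all names here are illustrative): when the bdim sits at position `b`, a logical dim `i` maps to physical dim `i` when `i < b` and `i + 1` otherwise, whereas always using `dim + 1` is only correct for `b == 0`.

```python
# Sketch (assumption: not the actual functorch implementation) of mapping
# a logical unsqueeze dim to a physical dim when bdim is not at position 0.

def physical_unsqueeze_dim(logical_dim, bdim, logical_rank):
    """Map a logical unsqueeze dim to a physical dim on the batched tensor.

    unsqueeze accepts dims in [-(rank + 1), rank], so wrap negatives
    against rank + 1. Logical dim i corresponds to physical dim i when
    i < bdim, and to i + 1 otherwise.
    """
    if logical_dim < 0:
        logical_dim += logical_rank + 1
    return logical_dim if logical_dim < bdim else logical_dim + 1

def unsqueeze_shape(shape, dim):
    """Shape after inserting a size-1 dim at `dim` (models unsqueeze_)."""
    shape = list(shape)
    shape.insert(dim, 1)
    return shape

# Batched tensor of physical shape (2, 3, 4) with bdim=1 (batch size 3);
# each example has logical shape (2, 4). Unsqueezing logical dim 0 must
# insert at physical dim 0:
assert physical_unsqueeze_dim(0, bdim=1, logical_rank=2) == 0
assert unsqueeze_shape([2, 3, 4], 0) == [1, 2, 3, 4]  # bdim shifts to 2

# The old rule always used dim + 1 (assuming bdim == 0), which here would
# insert at physical dim 1 and give each example the wrong shape (2, 1, 4).
```

Running the two asserts above shows the correct physical insertion point; the same mapping also handles negative dims via the `rank + 1` wrap.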
Test Plan:
- tests
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82899
Approved by: https://github.com/Chillee