[functorch] Selectively enable dispatch on kBatchedKey (pytorch/functorch#63)
This PR makes it so that dispatch on kBatchedKey can only happen if there are
tensors batched at the current vmap level. Otherwise, kBatchedKey is excluded
(even if BatchedTensors are present -- they must be batched at a different level).
To find tensors batched at the current level, we check:
- all Tensor arguments
- we peek into all TensorLists
- we peek into all Tensor?[] (optional tensor lists).
The above checks should be sufficient.
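The check over these three argument kinds can be modeled with a small Python sketch. This is a hypothetical model for illustration only (the names `BatchedTensor.level`, `is_batched_at`, and `any_batched_at_current_level` are made up here); the real logic lives in functorch's C++ dispatch code:

```python
from dataclasses import dataclass

@dataclass
class BatchedTensor:
    level: int  # the vmap level this tensor is batched at

def is_batched_at(t, cur_level: int) -> bool:
    return isinstance(t, BatchedTensor) and t.level == cur_level

def any_batched_at_current_level(args, cur_level: int) -> bool:
    # Check plain Tensor args, and peek into TensorLists / Tensor?[]
    # (lists whose entries may be None).
    for arg in args:
        if isinstance(arg, list):
            for t in arg:
                if t is not None and is_batched_at(t, cur_level):
                    return True
        elif arg is not None and is_batched_at(arg, cur_level):
            return True
    return False
```

If this predicate returns False, kBatchedKey is excluded from dispatch for the call, even when BatchedTensors at other (outer) levels are present among the arguments.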
Dispatch for kVmapModeKey is not affected.
Test Plan:
- ran all tests
- removed the special case in dot_batch_rule and added a test for it