Impose maximum level restriction for BatchedTensors (#39580)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/39580
We support at most 64 levels. This lets us represent a set of levels as
a bitset that fits into a single `int64_t`, and 64 is a reasonable upper
bound because vmap only supports (physical) tensors of up to 64
dimensions (see `kVmapMaxTensorDims`).
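As a rough sketch of why the cap helps, a set of levels in `[0, 64)` can be packed into one `int64_t`, with membership and insertion as single bit operations. The helper names (`addLevel`, `hasLevel`) and the `kVmapNumLevels` constant below are hypothetical illustrations, not the PR's actual API:

```cpp
#include <cstdint>
#include <stdexcept>

// Assumed constant mirroring the 64-level cap described above.
constexpr int64_t kVmapNumLevels = 64;

// Insert a level into the bitset; each level maps to one bit of the int64_t.
int64_t addLevel(int64_t levels_bitset, int64_t level) {
  if (level < 0 || level >= kVmapNumLevels) {
    throw std::out_of_range("vmap level must be in [0, 64)");
  }
  return levels_bitset | (int64_t(1) << level);
}

// Test whether a level is present in the bitset.
bool hasLevel(int64_t levels_bitset, int64_t level) {
  return (levels_bitset >> level) & 1;
}
```

With this encoding, unions and intersections of level lists are plain bitwise `|` and `&`, which is the payoff of capping levels at the width of `int64_t`.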
Test Plan:
`./build/bin/vmap_test`. One day we'll test this with the vmap Python
API.
Differential Revision: D21929249
Pulled By: zou3519
fbshipit-source-id: 2e99c0c519d6ab0c063fda20f4a0b1f53da6d450