pytorch
7038579c - Add batching rule for unsqueeze, squeeze, and transpose (#40455)

Add batching rule for unsqueeze, squeeze, and transpose (#40455)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40455

These don't need to be implemented right now but are useful later down the line. I thought I would use them in implementing vmap's `out_dims` functionality, but it turned out they weren't necessary. Since the code exists and is useful anyway, I am leaving this PR here.

Test Plan:
- `./build/bin/vmap_test`. We could test this through the vmap frontend API, but there is a catch: vmap cannot directly take integers right now (all inputs passed to vmap must be Tensors at the moment). It's possible to hack around that by declaring lambdas that take a single tensor argument, but those don't read nicely.

Differential Revision: D22216167

Pulled By: zou3519

fbshipit-source-id: 1a010f5d7784845cca19339d37d6467f5b987c32
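For context, a batching rule for an op like `transpose` mostly amounts to index bookkeeping: the physical tensor carries an extra batch dimension, so every logical (per-example) dim the user passes must be remapped past it before calling the underlying op. The sketch below is a minimal pure-Python illustration of that remapping, not PyTorch's actual C++ implementation; all function names here are hypothetical.

```python
# Hypothetical sketch (NOT PyTorch's actual code) of the dim bookkeeping
# a batching rule performs. Assume the batch dim sits at physical
# position `bdim`; logical dims at or past it shift right by one.

def logical_to_physical(dim, bdim):
    """Map a logical (per-example) dim to a physical dim."""
    return dim + 1 if dim >= bdim else dim

def transpose_batching_rule(logical_ndim, bdim, dim0, dim1):
    """Return the physical permutation implementing a logical
    transpose(dim0, dim1) on a tensor batched at physical dim `bdim`."""
    p0 = logical_to_physical(dim0, bdim)
    p1 = logical_to_physical(dim1, bdim)
    perm = list(range(logical_ndim + 1))  # +1 accounts for the batch dim
    perm[p0], perm[p1] = perm[p1], perm[p0]
    return perm

def unsqueeze_batching_rule(bdim, dim):
    """Physical position at which to unsqueeze for a logical unsqueeze(dim)."""
    return logical_to_physical(dim, bdim)

# vmapping transpose(0, 2) over rank-3 per-example tensors, with the
# batch dim stored at physical position 0, swaps physical dims 1 and 3:
print(transpose_batching_rule(3, 0, 0, 2))  # → [0, 3, 2, 1]
```

The same shift-past-the-batch-dim pattern covers `squeeze` and `unsqueeze`; only the permutation construction differs per op.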