161fd5f5 - Implement tensor.size(int) for BatchedTensor (#40028)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/40028

This has `tensor.size(int)` call `native::size` directly. Some alternatives I considered:
- Call `VariableType::size` directly. That seems isomorphic to what we're doing now.
- When creating a BatchedTensor from a regular tensor, put all of that tensor's dispatch keys into the BatchedTensor's dispatch key set and use the dispatcher's fallthrough mechanism. That seems weird because BatchedTensor is a tensor wrapper, and it is also error-prone: if BatchedTensor gets the VariableType key and something goes wrong, an AutogradMeta could get created on it.

Test Plan:
- `./build/bin/vmap_test`

Differential Revision: D22070655
Pulled By: zou3519
fbshipit-source-id: 18530579ad41f3c4f96589da41eb24a46caf7af9