Add `vectorize` flag to torch.autograd.functional.{jacobian, hessian} (#50915)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50915
Fixes #50584
Adds a `vectorize` flag to `torch.autograd.functional.jacobian` and
`torch.autograd.functional.hessian` (default: False). Under the hood,
the flag uses vmap as the backend to compute the Jacobian and Hessian,
providing speedups to users.
Test Plan:
- I updated all of the jacobian and hessian tests to also run with
vectorize=True.
- I added some simple sanity-check tests that compare, e.g., jacobian
with vectorize=False against jacobian with vectorize=True.
- The mechanism for vectorize=True goes through batched gradient
computation. We have separate tests for those (see other PRs in this
stack).
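A minimal sketch of the sanity check described above, comparing the default
(loop-based) Jacobian against the vmap-backed one; `exp_reducer` is an
illustrative function, not taken from the PR's tests:

```python
import torch
from torch.autograd.functional import jacobian

def exp_reducer(x):
    # Maps a (4, 4) input to a (4,) output.
    return x.exp().sum(dim=1)

x = torch.rand(4, 4)

J_loop = jacobian(exp_reducer, x)                  # default: vectorize=False
J_vmap = jacobian(exp_reducer, x, vectorize=True)  # vmap-backed batched grads

# Jacobian shape is output_shape + input_shape = (4,) + (4, 4).
assert J_loop.shape == (4, 4, 4)
# Both code paths should agree numerically.
assert torch.allclose(J_loop, J_vmap)
```

The same comparison applies to `torch.autograd.functional.hessian`, which
accepts the same `vectorize` keyword.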
Reviewed By: heitorschueroff
Differential Revision: D26057674
Pulled By: zou3519
fbshipit-source-id: a8ae7ca0d2028ffb478abd1b377f5b49ee39e4a1