df70e2fd - Refactor get analytical jacobian (#54049)

Refactor get analytical jacobian (#54049)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54049

The goal of this change is to factor out the core logic of getting the analytical Jacobian, which effectively computes `f(grad_out) = grad_out^T J = grad_input`. This lets us test logic that was not testable before, because we can now replace `f` with whatever we want in order to simulate the kinds of issues gradcheck is designed to catch.

Edit: much of what this PR originally aimed to enable is actually possible with hooks, so those tests were already added in an earlier PR in the stack. This change is still useful for reducing code duplication when adding the new fast gradcheck code (more details below).

After this change, `get_analytical_jacobian` is only responsible for gathering a list of rows that are later combined into a single Jacobian tensor, so it no longer needs to check the dtypes/sizes for correctness at that step.

That logic is factored out into a separate function, `combine_jacobian_rows`, which handles the list-of-rows -> single-Tensor step for each Jacobian and the error checking it entails. (This allows the code to be shared between the fast and slow versions.)

Test Plan: Imported from OSS

Reviewed By: ailzhang

Differential Revision: D27307240

Pulled By: soulitzer

fbshipit-source-id: 65bb58cda000ed6f3114e5b525ac3cae8da5b878
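For illustration, here is a minimal sketch of the two-step structure the message describes: gather one Jacobian row per output element via a vector-Jacobian product (`grad_out^T J`), then combine the rows into a single tensor, with the dtype/size checks living in the combine step. This is not the PR's actual code; `get_jacobian_rows` is a hypothetical helper name, and while `combine_jacobian_rows` is the function the PR names, the body shown here is only a guess at the shape of that step.

```python
import torch

def get_jacobian_rows(input, output):
    # Gather one Jacobian row per output element via vector-Jacobian
    # products: row_i = e_i^T J, using a one-hot grad_output e_i.
    # (Sketch only; the real gradcheck helpers differ in detail.)
    flat_out = output.reshape(-1)
    rows = []
    for i in range(flat_out.numel()):
        grad_out = torch.zeros_like(flat_out)
        grad_out[i] = 1.0
        (grad_in,) = torch.autograd.grad(
            flat_out, input, grad_out, retain_graph=True, allow_unused=True
        )
        rows.append(None if grad_in is None else grad_in.reshape(-1))
    return rows

def combine_jacobian_rows(rows, input):
    # Stack the gathered rows into a single (numel(output), numel(input))
    # Jacobian, doing the dtype/size checks here instead of while gathering.
    numel_in = input.numel()
    jacobian = torch.zeros(len(rows), numel_in, dtype=input.dtype)
    for i, row in enumerate(rows):
        if row is None:
            continue  # input unused by this output element: row stays zero
        if row.numel() != numel_in or row.dtype != input.dtype:
            raise RuntimeError("grad_input has unexpected size or dtype")
        jacobian[i] = row
    return jacobian

# Example: for y = x ** 2, the Jacobian at x = (1, 2, 3) is diag(2x).
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x ** 2
J = combine_jacobian_rows(get_jacobian_rows(x, y), x)
print(J)  # tensor([[2., 0., 0.], [0., 4., 0.], [0., 0., 6.]])
```

Keeping the row-gathering loop free of error checking is what lets a fast variant swap in a different gathering strategy while reusing the same combine-and-check step.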