pytorch
faa7d379 - [DDP] Support not all outputs used in loss calculation (#57081)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57081

Changes in this diff:
1) Enable the passthrough autograd function when find_unused_parameters=True.
2) With the above, move prepare_for_backward, which performs the unused parameter check, to the beginning of the backward pass, only when find_unused_parameters=True.
3) Enhance the unused parameter check to account for outputs that are not used in the loss.

The way (3) is implemented is by triggering the autograd hook for parameters that did not participate in the loss computation. Since they did not participate, the hook is triggered with a gradient of None, and the reducer handles this appropriately to ensure that the gradient is not touched.

Tested by ensuring that when a model output is not used in the loss, the corresponding grad is not modified. Also verified that the grads are the same in the local and DDP training cases, and that gradients are indeed not touched in this case, i.e. if a grad is originally None, it stays None rather than being zeroed.

Note that this diff does not enable the passthrough autograd function for the regular find_unused_parameters=False case, because that has a much bigger blast radius and needs additional careful analysis, especially with regard to performance.

ghstack-source-id: 129425139

Test Plan: CI

Reviewed By: zhaojuanmao

Differential Revision: D28048628

fbshipit-source-id: 71d7b6af8626804710017a4edd753787aa9bba61
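As a rough illustration of the scenario this change covers (not code from the commit): a DDP-wrapped model whose forward returns two heads, where only one head feeds the loss. With find_unused_parameters=True, the hooks of the parameters behind the unused head fire with a None gradient and their .grad is left untouched; previously this pattern could hang or error at reduction time because those hooks never fired. The module, shapes, and process-group settings below are illustrative assumptions, and the script is a minimal single-process sketch.

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


class TwoHeadModel(nn.Module):
    """Toy model with two heads; only one feeds the loss below."""

    def __init__(self):
        super().__init__()
        self.trunk = nn.Linear(8, 8)
        self.head_used = nn.Linear(8, 4)    # participates in the loss
        self.head_unused = nn.Linear(8, 4)  # output is ignored this iteration

    def forward(self, x):
        h = self.trunk(x)
        return self.head_used(h), self.head_unused(h)


def run(rank: int, world_size: int) -> None:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(TwoHeadModel(), find_unused_parameters=True)
    out_used, _out_unused = model(torch.randn(16, 8))

    # Only out_used contributes to the loss, so head_unused's parameters
    # never receive a real gradient during this backward pass.
    loss = out_used.sum()
    loss.backward()

    # Per the commit message, the reducer leaves the gradients of the
    # non-participating parameters untouched: head_unused's grad is expected
    # to stay None, while head_used has a real gradient.
    print("head_unused grad:", model.module.head_unused.weight.grad)
    print("head_used grad is None:", model.module.head_used.weight.grad is None)

    dist.destroy_process_group()


if __name__ == "__main__":
    # Single-process illustration; in practice this would be launched across
    # multiple ranks with torchrun or torch.multiprocessing.
    run(rank=0, world_size=1)
```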