Support all_reduce on a list of same-device tensors #21640 (#24949)
Summary:
Addresses https://github.com/pytorch/pytorch/issues/21640 for CPU tensors and the Gloo backend: a list of tensors residing on the same device can now be all-reduced in a single coalesced operation.
Questions:
- ~~Currently takes `AllreduceOptions`, since all of the options are the same. Would it be better to add a new `AllreduceCoalescedOptions` class?~~
- ~~I decided to inherit from `ProcessGroupGloo::AsyncWork` instead of `AsyncAllreduceWork` to shorten the inheritance chain a bit and for consistency with existing classes. However, this means that the two `getFunction` methods are copy-pasted. Would inheriting from `AsyncAllreduceWork` be preferable?~~
- ~~Should the work class be named `AsyncCoalescedAllreduceWork` or `AsyncAllreduceCoalescedWork`?~~

Thank you!
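For illustration, a minimal sketch of exercising coalesced all-reduce from Python. This assumes the `torch.distributed.all_reduce_coalesced` wrapper is available; this PR itself adds the C++ `ProcessGroupGloo` support, so the exact Python entry point and its availability are assumptions. The example uses a single-process (`world_size=1`) Gloo group so it can run standalone; with one rank, the SUM reduction leaves each tensor unchanged.

```python
import os
import torch
import torch.distributed as dist

# Single-process Gloo group, so the script is runnable standalone.
# MASTER_ADDR/MASTER_PORT are only needed for rendezvous.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

# A list of same-device (CPU) tensors, reduced in one coalesced call
# instead of one all_reduce per tensor.
tensors = [torch.ones(3), torch.full((2,), 2.0)]
dist.all_reduce_coalesced(tensors, op=dist.ReduceOp.SUM)

print(tensors[0].tolist(), tensors[1].tolist())
dist.destroy_process_group()
```

With more than one rank, each tensor in the list would hold the elementwise sum across all ranks after the call returns.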
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24949
Differential Revision: D17055580
Pulled By: mrshenli
fbshipit-source-id: e63b5fcaec6021053ea960776a09ee8cf11d1ec2