Fix docstring of torch.distributed.reduce_scatter (#84983)
Fixes #84865
Previous docstring of `torch.distributed.reduce_scatter`:
```
def reduce_scatter(output, input_list, op=ReduceOp.SUM, group=None, async_op=False):
    """
    Reduces, then scatters a list of tensors to all processes in a group.

    Args:
        output (Tensor): Output tensor.
        input_list (list[Tensor]): List of tensors to reduce and scatter.
        group (ProcessGroup, optional): The process group to work on. If None,
            the default process group will be used.
        async_op (bool, optional): Whether this op should be an async op.
```
Fixed:
```
def reduce_scatter(output, input_list, op=ReduceOp.SUM, group=None, async_op=False):
    """
    Reduces, then scatters a list of tensors to all processes in a group.

    Args:
        output (Tensor): Output tensor.
        input_list (list[Tensor]): List of tensors to reduce and scatter.
        op (optional): One of the values from
            ``torch.distributed.ReduceOp``
            enum. Specifies an operation used for element-wise reductions.
        group (ProcessGroup, optional): The process group to work on. If None,
            the default process group will be used.
        async_op (bool, optional): Whether this op should be an async op.
```
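For context, a minimal usage sketch of the signature being documented, including the now-documented `op` argument. It assumes two CUDA devices and the NCCL backend (which supports `reduce_scatter`); the master address and port values are placeholders:
```
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp

def worker(rank, world_size):
    # Each rank joins the default process group over NCCL
    # (assumes one CUDA device per rank; address/port are placeholders).
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # input_list[i] is the chunk this rank contributes toward rank i's output.
    input_list = [torch.full((2,), float(rank), device="cuda") for _ in range(world_size)]
    output = torch.empty(2, device="cuda")

    # op=ReduceOp.SUM (the documented default): output on rank r is the
    # element-wise sum over all ranks of their input_list[r].
    dist.reduce_scatter(output, input_list, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {output.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```
With `world_size=2`, both ranks should end up with `[1.0, 1.0]` in `output`: the element-wise sum of rank 0's and rank 1's contributions to that slot.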
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84983
Approved by: https://github.com/H-Huang