Revert "stage3: efficient compute of scaled_global_grad_norm (#5256)" (#5461)
This reverts commit 54c06872647ca60699f752e60ac1643bd05aa63c because
#5256 introduced a bug when the ZeRO3 and ZeRO Offload features are
enabled together. The bug was discovered via failures in the DS Chat CI workflow.
Failing tests in the CI workflow:
| Failing Test Name |
| --- |
| test_ds_chat[zero3--offload-] |
| test_ds_chat[zero3--offload-lora] |
| test_ds_chat[zero3-he-offload-] |
| test_ds_chat[zero3-he-offload-lora] |
Error message:
```
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cpu!
```
It seems that `torch.stack()` or `torch.norm()` has issues when the
offload feature is enabled and tensors are split between CPU and GPU;
however, this is just an initial guess and requires further
investigation.
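For context, a minimal sketch of the suspected failure mode (this is not DeepSpeed's actual code; the function name `global_grad_norm` and its signature are hypothetical): computing a global gradient norm by stacking per-partition norms and taking a single norm fails if some tensors have been offloaded to CPU while others remain on GPU, because `torch.stack()` requires all inputs to be on the same device. Moving each per-partition norm to a common device first sidesteps the mismatch.

```python
import torch

def global_grad_norm(grad_partitions, device=torch.device("cpu")):
    # Hypothetical sketch: if grad_partitions mixes CPU and GPU tensors
    # (as with ZeRO Offload), stacking them directly raises
    # "Expected all tensors to be on the same device". Moving each
    # per-partition norm to one device before stacking avoids that.
    norms = [g.norm(2).to(device) for g in grad_partitions]
    return torch.norm(torch.stack(norms), 2)

# CPU-only demo: norms are 2.0 and sqrt(12), so the global norm is 4.0.
grads = [torch.ones(4), torch.full((3,), 2.0)]
total = global_grad_norm(grads)
```

Whether the actual fix belongs at the stacking site or earlier in the offload path is part of the investigation above.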
@nelyahu Since you are the original author of the PR, if you have some
bandwidth, any help here is greatly appreciated!
After reverting this commit, all tests pass in the DS Chat CI workflow:
https://github.com/microsoft/DeepSpeed/actions/runs/8824064414/job/24225802763
@tjruwase for context.