pytorch
ac1ece05 - [DDP][Grad compression] Fix fp16 cpp hook (#63375)

[DDP][Grad compression] Fix fp16 cpp hook (#63375)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/63375

I think `tensor.copy_(tensor.to(torch::kFloat16));` keeps the tensor as float32, since `copy_` casts the half values back into the existing float32 storage rather than changing the destination dtype.

Tested by adding the following line:
```
LOG(INFO) << "Type is: " << compressed_tensor.scalar_type();
```
before:
```
I0816 17:03:09.823688 364141 default_comm_hooks.cpp:21] Type is: Float
```
after:
```
I0816 17:01:16.779052 353924 default_comm_hooks.cpp:21] Type is: Half
```

ghstack-source-id: 136056092
Test Plan: ci
Reviewed By: SciPioneer
Differential Revision: D30356256
fbshipit-source-id: 8208a705acd7628541cd43c8bf61d007dfdd2435
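For illustration, here is a minimal standalone libtorch sketch (not the actual `default_comm_hooks.cpp` code; the tensor names are hypothetical) showing why `copy_` leaves the destination dtype as Float, while binding the result of `to()` yields a Half tensor:

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // A float32 "gradient" tensor standing in for the bucket contents.
  torch::Tensor grad = torch::randn({4}, torch::kFloat32);

  // copy_ casts the half values back into the existing float32 storage,
  // so the tensor's scalar type stays Float -- the bug described above.
  grad.copy_(grad.to(torch::kFloat16));
  std::cout << "after copy_: " << grad.scalar_type() << std::endl;       // Float

  // Keeping the result of to() gives a genuinely half-precision tensor,
  // which is what an fp16 compression hook needs to communicate.
  torch::Tensor compressed = grad.to(torch::kFloat16);
  std::cout << "compressed:  " << compressed.scalar_type() << std::endl; // Half
  return 0;
}
```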