DeepSpeed
a02bc6e8 - fix(test): add bf16 model with fp32 grad_accum to supported configs

Commit · 34 days ago
fix(test): add bf16 model with fp32 grad_accum to supported configs

The test_ds_initialize.py::TestOptimizerImplementation test was missing the configuration (None, 'bf16', 'fp32') from its is_supported dict. This configuration (bf16 model with fp32 gradient accumulation, no ZeRO) is actually supported by DeepSpeed and uses FP16_Optimizer in bf16 mode. The test incorrectly expected NotImplementedError to be raised.

Signed-off-by: Masahiro Tanaka <mtanaka@anyscale.com>
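Below is a minimal sketch of the support table the commit message describes, assuming keys of the form (zero_stage, model_dtype, grad_accum_dtype); the surrounding entries and helper function are hypothetical illustrations, and the real dict in test_ds_initialize.py::TestOptimizerImplementation may differ in detail.

```python
# Sketch of the test's is_supported table, assuming keys of the form
# (zero_stage, model_dtype, grad_accum_dtype). The pre-existing entries
# here are hypothetical placeholders.
is_supported = {
    (None, 'fp16', None): True,   # hypothetical pre-existing entry
    (None, 'bf16', None): True,   # hypothetical pre-existing entry
    # Entry added by this commit: bf16 model weights with fp32 gradient
    # accumulation and no ZeRO is supported (FP16_Optimizer in bf16 mode),
    # so the test must not expect NotImplementedError for this combination.
    (None, 'bf16', 'fp32'): True,
}

def expects_not_implemented(zero_stage, model_dtype, grad_accum_dtype):
    """Hypothetical mirror of the test's branching: any configuration
    absent from is_supported is expected to raise NotImplementedError."""
    return (zero_stage, model_dtype, grad_accum_dtype) not in is_supported

# Before the fix, the missing entry made this return True, so the test
# wrongly expected deepspeed.initialize() to raise for a supported config.
assert not expects_not_implemented(None, 'bf16', 'fp32')
```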