transformers
b4b9da6d - tests: revert change of torch_require_multi_gpu to be device agnostic (#35721)

Committed 335 days ago
tests: revert change of torch_require_multi_gpu to be device agnostic (#35721)

* tests: revert change of torch_require_multi_gpu to be device agnostic

  Commit 11c27dd33 modified `torch_require_multi_gpu()` to be device agnostic instead of CUDA specific. This broke some tests which are rightfully CUDA specific, such as:

  * `tests/trainer/test_trainer_distributed.py::TestTrainerDistributed`

  In the current Transformers test architecture, `require_torch_multi_accelerator()` should be used to mark multi-GPU tests as device agnostic. This change addresses the issue introduced by 11c27dd33 and reverts the modification of `torch_require_multi_gpu()`.

  Fixes: 11c27dd33 ("Enable BNB multi-backend support (#31098)")

  Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>

* fix bug: modification of frozen set

---------

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
Co-authored-by: Titus von Koeller <9048635+Titus-von-Koeller@users.noreply.github.com>
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
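The follow-up fix in this PR ("fix bug: modification of frozen set") points at a common Python pitfall: `frozenset` is immutable, so trying to add a backend to one in place fails, and a new set must be built instead. A minimal stand-alone sketch, with illustrative backend names that are assumptions rather than the actual Transformers code:

```python
# Illustrative sketch of the "modification of frozen set" pitfall.
# The backend names here are hypothetical, not Transformers' real state.
cuda_backends = frozenset({"cuda"})

# frozenset has no mutating methods: in-place .add() raises AttributeError.
try:
    cuda_backends.add("xpu")
except AttributeError as err:
    print(f"cannot mutate: {err}")

# The fix: derive a new set via union instead of mutating in place.
all_backends = cuda_backends | {"xpu", "mps"}
print(sorted(all_backends))  # sorted for deterministic output
```

The same applies to any `frozenset`-typed registry: extending it means rebinding the name to a new union, not calling a mutator.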