accelerate
6cf8221a - Don't manage `PYTORCH_NVML_BASED_CUDA_CHECK` when calling `accelerate.utils.imports.is_cuda_available()` (#2524)

* Don't manage PYTORCH_NVML_BASED_CUDA_CHECK

  PYTORCH_NVML_BASED_CUDA_CHECK will use an NVML-based check when determining how many devices are available. That's useful for preventing CUDA initialization when doing that check (or calling `torch.cuda.is_available()`). Instead of manipulating that env var, one can call the torch utility `_device_count_nvml` directly, preventing the manipulation of the env var.

* Uses env var instead of private torch function

* Fixes flake8 check
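The pattern being removed can be sketched as follows. This is a minimal illustration, not accelerate's actual code: the helper names and the injected `check` callable are hypothetical, and a plain callable stands in for `torch.cuda.is_available()` so the sketch has no torch dependency. Only the env var name `PYTORCH_NVML_BASED_CUDA_CHECK` comes from the commit.

```python
import os

ENV_VAR = "PYTORCH_NVML_BASED_CUDA_CHECK"


def is_cuda_available_old(check):
    """Old approach (sketch): temporarily force the NVML-based device
    check by setting the env var, run the check, then restore the
    user's original value."""
    saved = os.environ.get(ENV_VAR)
    os.environ[ENV_VAR] = "1"
    try:
        return check()
    finally:
        # Restore whatever the user had set (or unset) before the call.
        if saved is None:
            os.environ.pop(ENV_VAR, None)
        else:
            os.environ[ENV_VAR] = saved


def is_cuda_available_new(check):
    """New approach (sketch): do not touch the env var at all; honor
    whatever the user configured and simply run the check."""
    return check()
```

The fix matters because mutating a process-wide env var around a library call is observable to the user: if anything reads `PYTORCH_NVML_BASED_CUDA_CHECK` concurrently, or the restore step is skipped, the user's configuration is silently clobbered.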