Use pytorch utils to detect ninja (#7687)
This PR replaces the ninja detection based on `import ninja` with the
corresponding pytorch utils.
The motivation behind this change is twofold.
1. The detection based on importing the python interface works in the
pip world, but fails in the conda world, as the corresponding ninja
conda package does not include the python interface. The implication for
detection is
[known](https://github.com/conda-forge/ninja-feedstock/issues/26), and
the recommended solution is to use the `subprocess` module to detect
ninja. This approach is followed by, e.g.,
[meson](https://github.com/mesonbuild/meson-python/blob/1c8092dc477cbc7e1e4d40913608d9daae75f793/mesonpy/__init__.py#L1077-L1088)
and
[pytorch](https://github.com/pytorch/pytorch/blob/d33d125c9413c5043aa5f74fad909a576288242d/torch/utils/cpp_extension.py#L2312-L2325).
As the `subprocess`-based check works in both the pip and the conda
world, I think it would make sense to switch over.
2. As ninja is only invoked through pytorch, it should be sufficient to
check whether pytorch can find ninja. Hence, instead of reimplementing the
`subprocess` check, I think it would be better to use the pytorch utils,
which implement said test anyway (see the sketch after this list).
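
For illustration, here is a minimal sketch of the two detection strategies. The helper names are made up for this example; only `torch.utils.cpp_extension.is_ninja_available` is the actual pytorch util, and the snippet is not the exact DeepSpeed diff.

```python
from torch.utils.cpp_extension import is_ninja_available


# Before: import-based detection. Fails with the conda-forge ninja package,
# which does not ship the Python bindings.
def ninja_available_via_import() -> bool:
    try:
        import ninja  # noqa: F401
        return True
    except ImportError:
        return False


# After: delegate to pytorch, whose check shells out to `ninja --version`
# via subprocess and therefore works for both pip and conda installs.
def ninja_available_via_torch() -> bool:
    return is_ninja_available()
```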
Without this refactor, every conda environment that depends on DeepSpeed
will need to install ninja as a PyPI dependency, even though the conda
version of ninja would be sufficient for the compilation. In my opinion,
this adds unnecessary complexity to these environments.
I tried to keep the changes minimal.
As some additional context, @sdvillal and I stumbled upon this issue
while working on packaging aqlaboratory/openfold-3 for conda-forge.
Signed-off-by: Tim Adler <tim.adler@bayer.com>
Co-authored-by: Olatunji Ruwase <tunji.ruwase@snowflake.com>