llama.cpp
py : use cpu-only torch in requirements.txt
#8335
Merged
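The change described in the title — making `requirements.txt` install CPU-only PyTorch wheels instead of the default CUDA builds — is commonly done by pointing pip at PyTorch's CPU wheel index. A minimal sketch of what such a requirements file could look like (the exact version pin and index URL are assumptions, not taken from this PR):

```
# Hypothetical requirements.txt fragment: pull torch from the CPU-only
# wheel index so pip does not download the much larger CUDA builds.
--extra-index-url https://download.pytorch.org/whl/cpu
torch
```

Installing from such a file works with a plain `pip install -r requirements.txt`; pip consults the extra index in addition to PyPI and resolves the `+cpu`-tagged wheels published there.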