llama.cpp
Support multiple GPUs (split mode) on SYCL backend #5806
Merged