llama.cpp
SYCL: Add gated linear attention kernel
#11175
Merged
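
The kernel named in the title implements gated linear attention, a linear-attention variant where a per-step, per-channel gate decays the running key-value state before each update. As a rough sketch of that recurrence (function name, shapes, and gate convention here are illustrative assumptions, not llama.cpp's actual SYCL API): the state S accumulates outer products k_t v_t^T, each step first scaled elementwise by a gate g_t in (0, 1), and the output is q_t applied to the state.

```python
import numpy as np

def gated_linear_attention(q, k, v, g):
    """Reference recurrence for gated linear attention (illustrative, not the llama.cpp API).

    q, k, g: (T, d_k) arrays; v: (T, d_v) array.
    g holds per-step, per-key-channel decay gates, assumed in (0, 1).
    """
    T, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))          # running key-value state
    out = np.zeros((T, d_v))
    for t in range(T):
        # Gate decays each key-dimension row of the state,
        # then the new outer product k_t v_t^T is accumulated.
        S = g[t][:, None] * S + np.outer(k[t], v[t])
        out[t] = q[t] @ S             # read out with the query
    return out
```

With all gates fixed at 1 this reduces to plain (ungated) linear attention, i.e. the output at step t is q_t applied to the cumulative sum of outer products up to t; the SYCL kernel's job is to run this sequential state update efficiently on device.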
