llama.cpp
OpenCL: add attention sinks support for FA kernels
#15706
Merged
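For context on what the PR title refers to: an attention sink is an extra learned per-head logit that participates in the softmax normalization but has no associated value row, so it only damps the regular attention weights. A flash-attention (FA) kernel has to fold this sink logit into its online-softmax max/denominator accumulation. The sketch below is a hedged, non-authoritative illustration of the math in plain NumPy, not the OpenCL kernel from this PR; the function name and shapes are assumptions for illustration.

```python
import numpy as np

def attention_with_sink(q, k, v, sink):
    # Hypothetical reference implementation, not llama.cpp code.
    # q: (d,), k/v: (n, d), sink: scalar learned sink logit for this head.
    # The sink joins the softmax denominator but contributes no value,
    # so a large sink shrinks the whole attention output toward zero.
    scores = k @ q / np.sqrt(q.shape[0])      # (n,) scaled dot products
    m = max(scores.max(), sink)               # stable max includes the sink
    e = np.exp(scores - m)
    denom = e.sum() + np.exp(sink - m)        # sink enters the denominator only
    return (e / denom) @ v                    # (d,) weighted value sum
```

With the sink logit driven to negative infinity this reduces to ordinary softmax attention, which is a handy sanity check when validating a kernel change like this one.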