llama.cpp
metal : use F32 attention accumulators in FA kernels
#13975
Merged
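
For context (not stated in the PR text itself): flash-attention (FA) kernels reduce softmax-weighted V rows into a running accumulator. With an F16 accumulator, long KV sequences lose low-order bits on each add and can even saturate (F16 max is about 65504); accumulating in F32 and converting to F16 only on the final store avoids both problems. Below is a minimal, illustrative Metal sketch of that idea, not the actual llama.cpp kernel; all names and buffer layouts here are hypothetical:

```metal
#include <metal_stdlib>
using namespace metal;

// Toy attention-output reduction showing the F16-input / F32-accumulator
// pattern. Illustrative only; not the llama.cpp FA kernel from this PR.
kernel void attn_out_f32_acc(
        device const half * probs    [[buffer(0)]], // softmax weights, F16
        device const half * v        [[buffer(1)]], // value rows, F16
        device       half * out      [[buffer(2)]], // output, stored as F16
        constant     uint & n_kv     [[buffer(3)]], // number of KV positions
        constant     uint & head_dim [[buffer(4)]], // head dimension
        uint d [[thread_position_in_grid]])         // head-dimension index
{
    // Accumulate in F32: summing many small F16 products in a half
    // would drop low-order bits and risks overflow past ~65504.
    float acc = 0.0f;
    for (uint j = 0; j < n_kv; ++j) {
        acc += (float) probs[j] * (float) v[j*head_dim + d];
    }
    out[d] = (half) acc; // round to F16 once, at the end
}
```

Converting only at the final store bounds the rounding error to a single F16 conversion instead of one per accumulation step, and the extra F32 register cost is small next to the precision gain.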
