llama.cpp
Fix kq_scale for the attention layers of PLaMo2
#14892
Merged
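
For context on what this PR concerns: in transformer attention, `kq_scale` is the factor applied to the query-key dot products before the softmax, conventionally `1 / sqrt(head_dim)`. The sketch below is not taken from the PR or from llama.cpp's source; it is a generic NumPy illustration (function name and parameters are hypothetical) of how an incorrect scale distorts the attention logits.

```python
import math
import numpy as np

def scaled_dot_product_attention(q, k, v, kq_scale=None):
    # Hypothetical illustration, not llama.cpp code.
    # Conventional scale is 1 / sqrt(head_dim); an incorrect kq_scale
    # (e.g. derived from the wrong dimension) skews the logits and
    # sharpens or flattens the attention distribution.
    head_dim = q.shape[-1]
    if kq_scale is None:
        kq_scale = 1.0 / math.sqrt(head_dim)
    logits = (q @ k.T) * kq_scale
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```

With a too-large scale, softmax saturates toward a one-hot distribution; with a too-small scale, attention approaches uniform averaging over the values.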