llama.cpp
backend cpu: add online flow for aarch64 Q4_0 GEMV/GEMM kernels
#9921
Merged