llama.cpp
Commit 04aaae1d - add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)

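The commit message describes an AVX2 path for the Q8_0 × Q8_0 dot product in ggml, reported as roughly twice as fast as the scalar loop. The sketch below illustrates how such a kernel can be written with AVX2 intrinsics; the `block_q8_0` layout, the function name `dot_q8_0_q8_0_avx2`, and the helper `hsum_float_8` are assumptions for illustration and are not taken from this commit.

```c
#include <immintrin.h>
#include <stdint.h>

#define QK8_0 32

// Assumed Q8_0 block layout: one float scale plus 32 signed 8-bit quants.
typedef struct {
    float  d;            // per-block scale
    int8_t qs[QK8_0];    // quantized values, assumed within [-127, 127]
} block_q8_0;

// Horizontal sum of the 8 floats in an AVX register.
static inline float hsum_float_8(__m256 x) {
    __m128 hi = _mm256_extractf128_ps(x, 1);
    __m128 lo = _mm256_castps256_ps128(x);
    lo = _mm_add_ps(lo, hi);
    lo = _mm_add_ps(lo, _mm_movehl_ps(lo, lo));
    lo = _mm_add_ss(lo, _mm_movehdup_ps(lo));
    return _mm_cvtss_f32(lo);
}

// Dot product of two Q8_0-quantized vectors of length n (n a multiple of QK8_0).
static void dot_q8_0_q8_0_avx2(const int n, float *s,
                               const block_q8_0 *x, const block_q8_0 *y) {
    const int nb = n / QK8_0;
    __m256 acc = _mm256_setzero_ps();

    for (int i = 0; i < nb; ++i) {
        // Combined scale for this pair of blocks.
        const __m256 d = _mm256_set1_ps(x[i].d * y[i].d);

        // One 32-byte load covers a whole block of quants.
        const __m256i bx = _mm256_loadu_si256((const __m256i *)x[i].qs);
        const __m256i by = _mm256_loadu_si256((const __m256i *)y[i].qs);

        // maddubs needs one unsigned operand: use |bx| and move bx's sign onto by,
        // so the products |bx[i]| * sign(bx[i])*by[i] equal bx[i]*by[i].
        const __m256i ax  = _mm256_sign_epi8(bx, bx);
        const __m256i sy  = _mm256_sign_epi8(by, bx);
        const __m256i dot = _mm256_maddubs_epi16(ax, sy);                  // 16x int16 pair sums
        const __m256i i32 = _mm256_madd_epi16(dot, _mm256_set1_epi16(1));  // 8x int32 sums

        // Scale the integer sums and accumulate in float.
        acc = _mm256_add_ps(acc, _mm256_mul_ps(d, _mm256_cvtepi32_ps(i32)));
    }

    *s = hsum_float_8(acc);
}
```

The `_mm256_sign_epi8` plus `_mm256_maddubs_epi16` combination is the usual workaround for the lack of a signed 8-bit multiply on AVX2: with quants bounded by 127 the paired int16 sums cannot overflow, and widening to int32 per block keeps the accumulation exact before the per-block float scaling.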