llama.cpp
04aaae1d
add avx2 for dot_q8_0_q8_0, 2x faster than scalar (#1211)
Commit
2 years ago
References
#1211 - add avx2 for dot_q8_0_q8_0, 2x faster than scalar
Author
YannFollet
Parents
0b2da205