llama.cpp
3420909d
- ggml : automatic selection of best CPU backend (#10606)
277 days ago
ggml : automatic selection of best CPU backend (#10606)

* ggml : automatic selection of best CPU backend
* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks
References
#10606 - ggml : automatic selection of best CPU backend
Author
slaren
Parents
86dc11c5