llama.cpp
3420909d - ggml : automatic selection of best CPU backend (#10606)

ggml : automatic selection of best CPU backend (#10606)

* ggml : automatic selection of best CPU backend
* amx : minor opt
* add GGML_AVX_VNNI to enable avx-vnni, fix checks
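The commit title describes picking the best CPU backend variant automatically at runtime. As a rough illustration of the general technique only (not the actual ggml code), the sketch below probes CPU features with GCC/Clang's `__builtin_cpu_supports` and selects the most capable of several precompiled variants; the variant names and ordering are hypothetical.

```c
// Illustrative sketch: runtime CPU-feature probing to choose the most capable
// backend variant. x86-only (GCC/Clang builtin); variant names are hypothetical
// and do not reflect ggml's actual implementation.
#include <stdio.h>

// A candidate backend variant: a name plus a predicate that reports whether
// the running CPU supports the instruction sets this variant requires.
typedef struct {
    const char *name;
    int (*supported)(void);
} cpu_variant;

static int has_avx512(void) { return __builtin_cpu_supports("avx512f"); }
static int has_avx2(void)   { return __builtin_cpu_supports("avx2"); }
static int has_avx(void)    { return __builtin_cpu_supports("avx"); }
static int always(void)     { return 1; }

int main(void) {
    // Ordered from most to least capable; the first supported variant wins.
    const cpu_variant variants[] = {
        { "cpu-avx512",  has_avx512 },
        { "cpu-avx2",    has_avx2   },
        { "cpu-avx",     has_avx    },
        { "cpu-generic", always     },
    };

    for (size_t i = 0; i < sizeof(variants) / sizeof(variants[0]); i++) {
        if (variants[i].supported()) {
            printf("selected backend variant: %s\n", variants[i].name);
            break;
        }
    }
    return 0;
}
```

Per the commit message, the change also adds a GGML_AVX_VNNI option to enable AVX-VNNI and fixes the related checks; assuming it is exposed as a CMake option, enabling it would look like `cmake -B build -DGGML_AVX_VNNI=ON`.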