llama.cpp
Add support for AVX VNNI #4589 (Merged)
Commits (9)
- feat: add avx_vnni based on intel documents · tikikun committed 2 years ago
- ggml: add avx vnni based on intel document · tikikun committed 2 years ago
- llama: add avx vnni information display · tikikun committed 2 years ago
- docs: add more details about using oneMKL and oneAPI for intel processors · tikikun committed 2 years ago
- docs: add more details about using oneMKL and oneAPI for intel processors · tikikun committed 2 years ago
- docs: add more details about using oneMKL and oneAPI for intel processors · tikikun committed 2 years ago
- docs: add more details about using oneMKL and oneAPI for intel processors · tikikun committed 2 years ago
- docs: add more details about using oneMKL and oneAPI for intel processors · tikikun committed 2 years ago
- Update ggml.c · tikikun committed 2 years ago