llama.cpp
Block interleaving support for Q4_K quantization for x86 AVX2 architecture
#12332
Merged
Commits: 5