llama.cpp
3d82dbcb - ggml : block interleaving support for Q4_K quantization for x86 AVX2 architecture (#12332)

ggml : block interleaving support for Q4_K quantization for x86 AVX2 architecture (#12332)

* Add block interleaving support for Q4_K quantization
* Remove whitespaces and fix CI/CD issues
* Update pointer of bsums from int16_t to const int16_t
* Add vector version of quantize_q8_K_4x8 function
* Update code formatting based on review comments
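The core idea behind block interleaving is to rearrange the quantized bytes of several consecutive blocks so that the same lane position of each block sits contiguously in memory, letting one wide SIMD load fetch corresponding elements from all blocks at once. The sketch below is a minimal illustration of that layout transform, not the actual ggml Q4_K code; the function name `interleave_4x8` and the 4-block/8-byte shape are illustrative assumptions.

```c
#include <stdint.h>

enum { NBLOCKS = 4, BLOCK_LEN = 8 };

// Hypothetical sketch: gather byte i of each of four blocks into
// adjacent destination bytes. After the transform, a single wide
// vector load reads lane i of all four blocks in one instruction.
static void interleave_4x8(const int8_t src[NBLOCKS][BLOCK_LEN],
                           int8_t dst[NBLOCKS * BLOCK_LEN]) {
    for (int i = 0; i < BLOCK_LEN; i++) {
        for (int b = 0; b < NBLOCKS; b++) {
            dst[i * NBLOCKS + b] = src[b][i];
        }
    }
}
```

In the interleaved buffer, `dst[i * NBLOCKS + b]` holds element `i` of block `b`, which is the property an AVX2 kernel exploits when it processes four blocks per loop iteration.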