llama.cpp
1b107b85 - ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)

Commit
2 years ago

ggml : generalize `quantize_fns` for simpler FP16 handling (#1237)

* Generalize quantize_fns for simpler FP16 handling
* Remove call to ggml_cuda_mul_mat_get_wsize
* ci : disable FMA for mac os actions

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
Author
sw