llama.cpp
3ba12fed - kv-cache : extend cache quantization checks (#21586)

kv-cache : extend cache quantization checks (#21586) so that they also accept explicitly enabled flash attention, rather than only the auto setting.