llama.cpp
8960efd0 - Vulkan: Add f32 accumulator support to quantized mul mat to fix GLM4 32B incoherence (#13607)

Committed 212 days ago
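
The commit title describes switching the quantized mul-mat path to accumulate in 32-bit floats, since 16-bit accumulation can overflow or lose precision on models with large intermediate sums (here GLM4 32B), which showed up as incoherent output. Below is a minimal, hypothetical C++ sketch of the general idea only; the actual change lives in the Vulkan compute shaders of llama.cpp, and the block layout, names, and sizes here are invented for illustration.

```cpp
// Hypothetical sketch: dequantize-and-accumulate dot product using an fp32
// accumulator. This is NOT the commit's shader code; it only illustrates why
// accumulating partial products in float (instead of half precision) avoids
// overflow/precision loss for rows with large magnitudes.
#include <cstdint>
#include <cstdio>
#include <vector>

// Invented block format: 16 int8 quants sharing one float scale, loosely
// mirroring how quantization blocks pair quantized values with a scale.
struct QBlock {
    float  scale;
    int8_t q[16];
};

// Dot product of one quantized weight row with a float activation vector,
// accumulated in fp32.
float dot_row_f32_acc(const std::vector<QBlock> &row, const std::vector<float> &x) {
    float acc = 0.0f;  // 32-bit accumulator
    for (size_t b = 0; b < row.size(); ++b) {
        for (int i = 0; i < 16; ++i) {
            // Dequantize on the fly and accumulate in full precision.
            acc += row[b].scale * (float) row[b].q[i] * x[b * 16 + i];
        }
    }
    return acc;
}

int main() {
    // Tiny example: one block of weights against a 16-element activation.
    std::vector<QBlock> row(1);
    row[0].scale = 0.125f;
    for (int i = 0; i < 16; ++i) row[0].q[i] = (int8_t) (i - 8);
    std::vector<float> x(16, 1.0f);
    std::printf("dot = %f\n", dot_row_f32_acc(row, x));
    return 0;
}
```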