llama.cpp
434b2a1f - ggml-webgpu: add Q1_0 support (#22374)

Commit (17 days ago):
ggml-webgpu: add Q1_0 support (#22374)

* add fast matmul matvec q1_0 kernel
* ggml-webgpu: drop redundant zero-fills in Q1_0 shmem init