llama.cpp
970b5ab7 - ggml-cuda : add TQ2_0 support

Commit · 1 year ago