llama.cpp
28baac9c - ci : migrate ggml ci to self-hosted runners (#16116)

Committed 2 days ago
* ci : migrate ggml ci to self-hosted runners
* ci : add T4 runner
* ci : add instructions for adding self-hosted runners
* ci : disable test-backend-ops from debug builds due to slowness
* ci : add AMD V710 runner (vulkan)
* cont : add ROCm workflow
* ci : switch to qwen3 0.6b model
* cont : fix the context size
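For context, a GitHub Actions job is routed to a self-hosted machine through the labels given in `runs-on`. The sketch below is illustrative only and is not the actual llama.cpp workflow added by this commit; the workflow name, runner labels, and build options are assumptions chosen to match the commit message (CUDA T4 runner, Release-only `test-backend-ops`).

```yaml
# Illustrative sketch, not the real ggml CI workflow: workflow name,
# runner labels, and build flags are assumptions for demonstration.
name: ggml-ci-selfhosted-example

on: [push]

jobs:
  build-cuda-t4:
    # A label list under runs-on routes the job to a self-hosted runner
    # registered with all of these labels (e.g. a machine with an NVIDIA T4).
    runs-on: [self-hosted, Linux, X64, nvidia-t4]
    steps:
      - uses: actions/checkout@v4

      - name: Configure and build (Release)
        run: |
          cmake -B build -DCMAKE_BUILD_TYPE=Release -DGGML_CUDA=ON
          cmake --build build -j

      - name: Backend op tests (Release only)
        # Per the commit message, test-backend-ops is skipped for Debug
        # builds due to slowness; this job only builds Release.
        run: ./build/bin/test-backend-ops
```

A runner is attached to the repository (or organization) from Settings → Actions → Runners, and the labels assigned there must match the `runs-on` list for the job to be picked up.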