llama.cpp
ggml webgpu: update Vulkan backend CI to use self-hosted runner
#21052
Open