llama.cpp
ggml webgpu: update Vulkan backend CI to use self-hosted runner #21052
Open
reeselevine wants to merge 7 commits into ggml-org:master from reeselevine:update-workflows
48a587ca reeselevine: Update workflows to remove dependence on llvmpipe
reeselevine requested a review 1 day ago
github-actions added the devops label
d6ab81fe reeselevine: Try setting Dawn_DIR
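The Dawn_DIR commit presumably points CMake's `find_package(Dawn)` at a prebuilt Dawn install on the runner. A hedged sketch of such a configure invocation; the paths and the exact option set are illustrative, not taken from the PR:

```shell
# Hypothetical runner paths; Dawn_DIR tells CMake's find_package(Dawn)
# where DawnConfig.cmake lives so it is found in Config mode.
cmake -B build \
  -DGGML_WEBGPU=ON \
  -DDawn_DIR=/opt/dawn/lib64/cmake/Dawn
```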
c095e1ec reeselevine: Remove C++20 initializers
reeselevine requested a review 1 day ago
github-actions added the ggml and WebGPU labels
d969d5eb reeselevine: Move to proper guid
reeselevine marked this pull request as draft 1 day ago
519eada3 reeselevine: Try avoiding segfaults on Vulkan backend process exit
43685c75 reeselevine: Remove compiler warnings on parameter casting
53ffd67d reeselevine: Fix soft_max and update reg_tile accumulation to f32 for better preci…

