llama.cpp
ggml: WebGPU backend host improvements and style fixing
#14978
Merged


reeselevine merged 9 commits into ggml-org:master from reeselevine:master
reeselevine Add parameter buffer pool, batching of submissions, refactor command …
30ba139e
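This first commit introduces a parameter buffer pool and batches command submissions instead of submitting once per operation. A minimal sketch of that idea, assuming Dawn's `webgpu_cpp.h` C++ API; all identifiers (`ParamBufPool`, `kParamBufSize`, `submit_batch`) are illustrative, not the PR's actual code:

```cpp
// Hypothetical sketch of a parameter-buffer pool with batched queue
// submissions; identifiers are illustrative, not the actual ggml-webgpu code.
#include <webgpu/webgpu_cpp.h>

#include <cstdint>
#include <vector>

struct ParamBufPool {
    wgpu::Device              device;
    std::vector<wgpu::Buffer> free_bufs;  // reusable uniform buffers
    std::vector<wgpu::Buffer> staged;     // handed out for the current batch

    static constexpr uint64_t kParamBufSize = 256;

    wgpu::Buffer alloc() {
        if (!free_bufs.empty()) {
            wgpu::Buffer buf = free_bufs.back();
            free_bufs.pop_back();
            staged.push_back(buf);
            return buf;
        }
        wgpu::BufferDescriptor desc;
        desc.size  = kParamBufSize;
        desc.usage = wgpu::BufferUsage::Uniform | wgpu::BufferUsage::CopyDst;
        wgpu::Buffer buf = device.CreateBuffer(&desc);
        staged.push_back(buf);
        return buf;
    }

    // Return every staged buffer to the pool in one go, mirroring the later
    // "Free staged parameter buffers at once" commit.
    void reclaim() {
        free_bufs.insert(free_bufs.end(), staged.begin(), staged.end());
        staged.clear();
    }
};

// Submit all recorded command buffers in a single queue.Submit() call instead
// of one submission per operation.
void submit_batch(wgpu::Queue queue, std::vector<wgpu::CommandBuffer> & commands, ParamBufPool & pool) {
    if (commands.empty()) {
        return;
    }
    queue.Submit(commands.size(), commands.data());
    commands.clear();
    pool.reclaim();
}
```

Pooling avoids creating and destroying a small uniform buffer for every kernel launch, and batching amortizes per-submission overhead across many operations.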
reeselevine Add header for linux builds
04d7b272
reeselevine Free staged parameter buffers at once
01c8ced2
reeselevine Format with clang-format
bfff27f1
github-actions added the ggml label
guokoni commented on 2025-07-31
reeselevine Fix thread-safe implementation
b8012ecc
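The thread-safety fix presumably guards shared host state, such as the parameter buffer pool, against concurrent access during graph evaluation. The general pattern might look like the mutex-guarded pool below; this is an assumption, not the PR's actual implementation:

```cpp
// Illustrative only: a mutex-guarded pool so that concurrent callers can
// allocate and release parameter buffers safely. Not the PR's actual code.
#include <webgpu/webgpu_cpp.h>

#include <cstdint>
#include <mutex>
#include <vector>

struct ThreadSafeParamBufPool {
    wgpu::Device              device;
    std::mutex                mtx;
    std::vector<wgpu::Buffer> free_bufs;

    wgpu::Buffer alloc(uint64_t size) {
        std::lock_guard<std::mutex> lock(mtx);  // serialize access to the free list
        if (!free_bufs.empty()) {
            wgpu::Buffer buf = free_bufs.back();
            free_bufs.pop_back();
            return buf;
        }
        wgpu::BufferDescriptor desc;
        desc.size  = size;
        desc.usage = wgpu::BufferUsage::Uniform | wgpu::BufferUsage::CopyDst;
        return device.CreateBuffer(&desc);
    }

    void release(wgpu::Buffer buf) {
        std::lock_guard<std::mutex> lock(mtx);
        free_bufs.push_back(buf);
    }
};
```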
reeselevine Use device implicit synchronization
cddda7e7
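"Device implicit synchronization" most likely refers to relying on WebGPU's ordering guarantees on a single queue: `WriteBuffer` calls and subsequent `Submit` calls on the same queue are observed in submission order, so no explicit fence is needed between uploading parameters and the dispatch that reads them. A hedged sketch with illustrative names:

```cpp
// Relies on WebGPU's single-queue ordering: the WriteBuffer below is ordered
// before the Submit, so the dispatch sees the uploaded parameters without any
// explicit fence. Sketch only; names are illustrative.
#include <webgpu/webgpu_cpp.h>

#include <cstddef>
#include <cstdint>

void dispatch_with_params(wgpu::Device device, wgpu::Queue queue,
                          wgpu::ComputePipeline pipeline, wgpu::BindGroup bind_group,
                          wgpu::Buffer param_buf, const void * params, size_t params_size,
                          uint32_t workgroups) {
    // Stage the shader parameters; ordered ahead of the submission below.
    queue.WriteBuffer(param_buf, 0, params, params_size);

    wgpu::CommandEncoder encoder = device.CreateCommandEncoder();
    wgpu::ComputePassEncoder pass = encoder.BeginComputePass();
    pass.SetPipeline(pipeline);
    pass.SetBindGroup(0, bind_group);
    pass.DispatchWorkgroups(workgroups);
    pass.End();

    wgpu::CommandBuffer cb = encoder.Finish();
    queue.Submit(1, &cb);  // executes after the WriteBuffer, in queue order
}
```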
reeselevine Merge remote-tracking branch 'upstream/master' into fixes
1d5726a2
reeselevine Update workflow to use custom release
6a20e396
reeselevine Remove testing branch workflow
ea39068e
github-actions added the devops label
ggerganov approved these changes on 2025-08-02
reeselevine merged 587d0118 into master 136 days ago