llama.cpp
vulkan: change graph_compute to be async and enable get_tensor_async
#17158
Merged


jeffbolznv
jeffbolznv requested a review from 0cc4m 63 days ago
github-actions added the Vulkan label
github-actions added the ggml label
jeffbolznv: vulkan: change graph_compute to be async and enable get_tensor_async (12ea0179)
jeffbolznv: fix thread safety errors (60bc85ca)
jeffbolznv: teardown context cleanly (924df57e)
jeffbolznv force-pushed to 924df57e 63 days ago
jeffbolznv: Handle async read to non-pinned dst (3343b605)
0cc4m approved these changes on 2025-11-15
0cc4m merged 38eaf32a into master 59 days ago
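For context, a minimal sketch of how a caller of the public ggml-backend API can benefit from the Vulkan backend becoming asynchronous, assuming a Vulkan `backend`, a built `ggml_cgraph`, and an output tensor `out`. It only uses functions from the existing ggml-backend C API (`ggml_backend_graph_compute_async`, `ggml_backend_tensor_get_async`, `ggml_backend_synchronize`); the backend-internal Vulkan changes made by this PR are not shown.

```c
#include "ggml.h"
#include "ggml-backend.h"

/* Hedged sketch: submit a graph and read back a result without blocking
 * between the two steps. The helper name run_and_read is illustrative. */
static void run_and_read(ggml_backend_t backend, struct ggml_cgraph * graph,
                         struct ggml_tensor * out, void * host_buf) {
    /* Submit the graph; with an async graph_compute the backend may return
     * before the GPU has finished executing. */
    ggml_backend_graph_compute_async(backend, graph);

    /* Enqueue the device-to-host copy behind the submitted work instead of
     * blocking immediately (this is what get_tensor_async enables). */
    ggml_backend_tensor_get_async(backend, out, host_buf, 0, ggml_nbytes(out));

    /* Wait for both the compute and the copy; only now is host_buf valid. */
    ggml_backend_synchronize(backend);
}
```

One of the later commits ("Handle async read to non-pinned dst") suggests the backend also has to cope with destination buffers that are not pinned host memory, where a direct async copy is not possible.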
