llama.cpp
vulkan: support flash attention sinks
#15126
Merged


0cc4m merged 1 commit into ggml-org:master from jeffbolznv:fattn_sinks
jeffbolznv added 1 commit: 8eef2f33 vulkan: support fattn sinks
jeffbolznv requested a review from 0cc4m 65 days ago
github-actions added the Vulkan and ggml labels
0cc4m approved these changes on 2025-08-07
0cc4m merged c4f53563 into master 64 days ago
Art39print approved these changes on 2025-08-09
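For context on what the PR title refers to: an attention sink is an extra learned per-head logit that participates in the softmax normalization of the attention scores but contributes no value, so it can absorb probability mass. The PR is shader code; the snippet below is only a minimal numerical sketch of that softmax variant in Python, and `softmax_with_sink` is a hypothetical reference helper, not a function from llama.cpp.

```python
import math

def softmax_with_sink(scores, sink):
    # Reference (non-shader) softmax with an attention sink: the sink logit
    # joins the denominator but produces no output weight, so the returned
    # weights sum to less than 1 when the sink carries any mass.
    m = max(max(scores), sink)                 # shared max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    denom = sum(exps) + math.exp(sink - m)     # sink only enters the denominator
    return [e / denom for e in exps]

weights = softmax_with_sink([2.0, 1.0, 0.5], sink=1.5)
```

In a fused flash-attention kernel the same effect is obtained by folding `exp(sink - running_max)` into the running normalizer at the end of the online-softmax pass, which is why sink support touches the attention shaders rather than a separate op.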
