llama.cpp
OpenCL: add attention sinks support for FA kernels
#15706
Merged

lhez merged 1 commit into ggml-org:master from rmatif:add-attn-sinks-fa
Commit b27e95a7 (rmatif): add attn sinks support for FA kernels
rmatif requested reviews from max-krasnyansky and lhez 68 days ago
github-actions added the ggml and OpenCL labels
lhez approved these changes on 2025-09-02
lhez merged 97669e40 into master 66 days ago
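For background on what the PR title refers to: an attention sink is a learned per-head logit that participates in the softmax normalization but contributes no value vector, so it soaks up probability mass without affecting the weighted sum directly. The sketch below shows that math in NumPy for a single head; the function name and shapes are illustrative and not taken from the PR's OpenCL kernels.

```python
import numpy as np

def attention_with_sink(q, k, v, sink):
    """Single-head attention with one scalar sink logit.
    q: (d,), k: (n, d), v: (n, d), sink: float.
    The sink joins the softmax denominator but has no value row,
    so the attention weights over the n keys sum to less than 1."""
    scale = 1.0 / np.sqrt(q.shape[-1])
    logits = (k @ q) * scale                 # (n,) raw attention scores
    m = max(logits.max(), sink)              # stable max, sink included
    w = np.exp(logits - m)                   # (n,) unnormalized weights
    denom = w.sum() + np.exp(sink - m)       # sink inflates the denominator only
    return (w / denom) @ v                   # (d,) output
```

In a streaming flash-attention kernel the same result can be obtained by folding `exp(sink - m)` into the running softmax denominator after the last KV tile, which is presumably the shape of the change these OpenCL FA kernels received.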
