llama.cpp
SYCL: Add gated linear attention kernel #11175

Merged
NeoZhangJianyu merged 3 commits into ggml-org:master from qnixsynapse:gla
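
This PR adds a SYCL implementation of the gated linear attention (GLA) operator to ggml's SYCL backend. As background for what such a kernel computes: gated linear attention keeps a per-head recurrent state that is decayed by a learned gate and updated with an outer product of key and value at each token. Below is a minimal scalar sketch of that recurrence; the memory layout, gating convention, and `scale` factor are illustrative assumptions, not the PR's actual code.

```cpp
// Scalar reference for one head: S is a d x d recurrent state
// (row i = key dim, col j = value dim), updated token by token.
// Layout and gating convention are assumptions for illustration.
#include <cstddef>

void gla_reference(const float *q, const float *k, const float *v,
                   const float *g,   // per-token, per-key-dim decay gates
                   float *S,         // d*d state, carried across calls
                   float *out,       // n_tokens*d outputs
                   std::size_t n_tokens, std::size_t d, float scale) {
    for (std::size_t t = 0; t < n_tokens; ++t) {
        const float *qt = q + t * d, *kt = k + t * d;
        const float *vt = v + t * d, *gt = g + t * d;
        float       *ot = out + t * d;
        for (std::size_t j = 0; j < d; ++j) ot[j] = 0.0f;
        for (std::size_t i = 0; i < d; ++i) {
            for (std::size_t j = 0; j < d; ++j) {
                // decay the old state by the gate, add the k v^T outer product
                S[i * d + j] = gt[i] * S[i * d + j] + kt[i] * vt[j];
                // accumulate o_t = scale * q_t S
                ot[j] += scale * qt[i] * S[i * d + j];
            }
        }
    }
}
```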
qnixsynapse pushed 1af85b51: SYCL: Add Gated Linear attention kernel
qnixsynapse pushed 81d85290: glahpp: add a space at the end of file
github-actions added the ggml and SYCL labels
Alcpz requested changes on 2025-01-13
qnixsynapse pushed db3ded2c: gla: Put the barrier inside the main logic loop
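
This follow-up commit addresses the review feedback: in SYCL, a work-group barrier must be reached by every work-item of the group, and a token-by-token recurrence requires all work-items to finish with one step's shared data before the next step overwrites it. Putting the barrier inside the main loop enforces that ordering on every iteration. The sketch below shows the general pattern only, assuming one work-group per head with each work-item owning one state column and USM device pointers; names and layout are illustrative, not the PR's actual kernel.

```cpp
// Sketch of the barrier-in-loop pattern (not the PR's kernel): one
// work-group of d work-items, work-item j owns column j of the global
// state S; k/v for the current token are staged in local memory.
// k, v, S are assumed to be USM device pointers.
#include <sycl/sycl.hpp>

void gla_state_update(sycl::queue &q, const float *k, const float *v,
                      float *S, int n_tokens, int d) {
    q.submit([&](sycl::handler &h) {
        sycl::local_accessor<float, 1> kv(sycl::range<1>(2 * d), h);
        h.parallel_for(
            sycl::nd_range<1>(sycl::range<1>(d), sycl::range<1>(d)),
            [=](sycl::nd_item<1> it) {
                const int j = (int) it.get_local_id(0); // value-dim column
                for (int t = 0; t < n_tokens; ++t) {
                    // stage this token's k and v into local memory
                    kv[j]     = k[t * d + j];
                    kv[d + j] = v[t * d + j];
                    // barrier inside the main loop: the staged k/v must be
                    // visible to every work-item before anyone reads it
                    it.barrier(sycl::access::fence_space::local_space);
                    for (int i = 0; i < d; ++i) {
                        S[i * d + j] += kv[i] * kv[d + j]; // k_i * v_j
                    }
                    // second barrier: all work-items finish reading kv
                    // before the next iteration overwrites it
                    it.barrier(sycl::access::fence_space::local_space);
                }
            });
    }).wait();
}
```

A barrier outside the loop, or one reached only by some work-items, is undefined behavior in SYCL, since group barriers must be encountered uniformly by the whole work-group.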
Alcpz approved these changes on 2025-01-13
NeoZhangJianyu approved these changes on 2025-01-15
NeoZhangJianyu merged f446c2cf into master on 2025-01-15
qnixsynapse deleted the gla branch