llama.cpp
f446c2cf
- SYCL: Add gated linear attention kernel (#11175)
Commit
269 days ago
SYCL: Add gated linear attention kernel (#11175)

* SYCL: Add gated linear attention kernel
* gla.hpp: add a newline at the end of file
* gla: put the barrier inside the main logic loop
References
#11175 - SYCL: Add gated linear attention kernel
Author
qnixsynapse
Parents
b4d92a59