llama.cpp
b957b8f5 - cuda : add flash_attn kernel (wip)

Committed 1 year ago.
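For context on what a flash-attention kernel computes: the core idea is to stream over the K/V rows while maintaining an online softmax (a running maximum, running normalizer, and running exp-weighted accumulator), so the full row of attention scores is never materialized. The sketch below is a deliberately simplified illustration of that idea, not the kernel added in this commit; the name `flash_attn_ref`, the fixed head size `D`, the one-thread-per-query mapping, and the row-major float layout are all assumptions made for the example.

```cuda
#include <cmath>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

constexpr int D = 64; // head dimension, fixed for this sketch (assumption)

// One thread per query row: stream over K/V with an online softmax so the
// full row of attention scores is never stored. Q, K, V, O are row-major
// [rows x D] float buffers; O = softmax(scale * Q K^T) V.
__global__ void flash_attn_ref(const float * Q, const float * K, const float * V,
                               float * O, int n_q, int n_kv, float scale) {
    const int iq = blockIdx.x * blockDim.x + threadIdx.x;
    if (iq >= n_q) return;

    const float * q = Q + iq * D;

    float m = -INFINITY;   // running maximum of the scaled scores
    float l = 0.0f;        // running sum of exp(score - m)
    float acc[D] = {0.0f}; // running exp-weighted sum of V rows

    for (int ik = 0; ik < n_kv; ++ik) {
        const float * k = K + ik * D;

        float s = 0.0f;                               // s = scale * dot(q, k)
        for (int i = 0; i < D; ++i) s += q[i] * k[i];
        s *= scale;

        const float m_new = fmaxf(m, s);
        const float corr  = expf(m - m_new);          // rescales previous partials
        const float p     = expf(s - m_new);

        const float * v = V + ik * D;
        for (int i = 0; i < D; ++i) acc[i] = acc[i] * corr + p * v[i];
        l = l * corr + p;
        m = m_new;
    }

    float * o = O + iq * D;
    for (int i = 0; i < D; ++i) o[i] = acc[i] / l;    // final softmax normalization
}

int main() {
    const int n_q = 4, n_kv = 8;
    const float scale = 1.0f / sqrtf((float) D);

    // Constant test data: every attention weight is equal, so each output row
    // should simply be the average of the V rows (all 0.03f here).
    std::vector<float> hQ(n_q * D, 0.01f), hK(n_kv * D, 0.02f), hV(n_kv * D, 0.03f), hO(n_q * D);

    float *dQ, *dK, *dV, *dO;
    cudaMalloc(&dQ, hQ.size() * sizeof(float));
    cudaMalloc(&dK, hK.size() * sizeof(float));
    cudaMalloc(&dV, hV.size() * sizeof(float));
    cudaMalloc(&dO, hO.size() * sizeof(float));
    cudaMemcpy(dQ, hQ.data(), hQ.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dK, hK.data(), hK.size() * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dV, hV.data(), hV.size() * sizeof(float), cudaMemcpyHostToDevice);

    flash_attn_ref<<<(n_q + 31) / 32, 32>>>(dQ, dK, dV, dO, n_q, n_kv, scale);
    cudaMemcpy(hO.data(), dO, hO.size() * sizeof(float), cudaMemcpyDeviceToHost);

    printf("O[0][0] = %f (expected 0.03)\n", hO[0]);

    cudaFree(dQ); cudaFree(dK); cudaFree(dV); cudaFree(dO);
    return 0;
}
```

A production kernel (as a WIP commit like this one works toward) would additionally tile K/V through shared memory, process queries in blocks, use half-precision math and masking, and handle arbitrary head sizes; the sketch keeps only the online-softmax streaming structure that makes the memory footprint independent of the KV length.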