llama.cpp
a9b5fe98 - fix: Fix logic for initializing inputs and attn layers for hybrid caches

Commit · 197 days ago

fix: Fix logic for initializing inputs and attn layers for hybrid caches

Branch: GraniteFour
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>