llama.cpp
9db44a2a
- fix: Fix resize vs reserve and skip null tensors in size computation
Committed 195 days ago
fix: Fix resize vs reserve and skip null tensors in size computation

https://github.com/ggml-org/llama.cpp/pull/13979/files#r2149469788

Branch: HybridRecurrentCache

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Co-Authored-By: @younesbelkada
References
#13979 - Hybrid recurrent cache
Author
gabe-l-hart
Committer
gabe-l-hart
Parents
11cd80d5