llama.cpp
Commit 1f17ea63
2 years ago
speculative : fix KV cache management
References
#3228 - llama : custom attention mask + parallel decoding + no context swaps
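The referenced PR replaced context swaps with a sequence-based KV cache, and this fix presumably brings the speculative example in line with it: when the target model rejects drafted tokens, the cached entries past the last accepted position have to be evicted from both the draft and target contexts before decoding resumes. Below is a minimal sketch of that kind of rollback, assuming the llama_kv_cache_seq_rm call from the #3228 API; the rollback_kv helper and variable names are hypothetical, not the commit's actual diff.

```cpp
// Sketch only: evict KV cache entries for rejected draft tokens.
// llama_kv_cache_seq_rm is the sequence-based API from #3228; the
// helper and its arguments are illustrative assumptions.
#include "llama.h"

// Drop cached entries for positions >= n_past in sequence 0 of both
// the target and draft contexts, so decoding resumes from the last
// accepted token without stale keys/values from rejected drafts.
static void rollback_kv(llama_context * ctx_tgt, llama_context * ctx_dft, llama_pos n_past) {
    // p1 < 0 means "to the end of the sequence" per llama.h
    llama_kv_cache_seq_rm(ctx_tgt, 0, n_past, -1);
    llama_kv_cache_seq_rm(ctx_dft, 0, n_past, -1);
}
```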
Author
ggerganov
Parents
7c1bdd0e