llama.cpp
Commit 6ab116ac: move batch_allocr inside decode/encode_internal
1 year ago
References
#9966 - llama : fix empty batch causing llama_batch_allocr to crash
Author
ngxson
Parents
bd697ca7