llama.cpp
PR #9966 (Merged): llama : fix empty batch causing llama_batch_allocr to crash
Commits (5)
- llama : fix empty batch cause llama_batch_allocr to crash (ngxson, committed 1 year ago)
- move batch_allocr inside decode/encode_internal (ngxson, committed 1 year ago)
- fix build (ngxson, committed 1 year ago)
- add GGML_ASSERT (ngxson, committed 1 year ago)
- Apply suggestions from code review (ngxson, committed 1 year ago)