llama.cpp
llama : fix empty batch causing llama_batch_allocr to crash
#9966
Merged

Commits
  • llama : fix empty batch cause llama_batch_allocr to crash
    ngxson committed 1 year ago
  • move batch_allocr inside decode/encode_internal
    ngxson committed 1 year ago
  • fix build
    ngxson committed 1 year ago
  • add GGML_ASSERT
    ngxson committed 1 year ago
  • Apply suggestions from code review
    ngxson committed 1 year ago
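The commits above describe guarding the decode/encode path so that an empty batch is rejected (via an assert or early return) before it reaches the batch allocator. A minimal sketch of that guard, using stand-in types whose names merely mirror llama.cpp (the real `llama_batch` and `llama_decode` signatures differ; this is an illustration, not the PR's actual diff):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Stand-in for llama.cpp's llama_batch (assumed, simplified shape).
struct llama_batch {
    int32_t n_tokens = 0;
    std::vector<int32_t> token;
};

// Sketch of the fix: validate the batch up front, before any batch
// allocation happens. Returns 0 on success and a negative error code
// on invalid input, mirroring llama_decode's int-return convention.
int decode_internal(const llama_batch & batch) {
    if (batch.n_tokens == 0) {
        // Before the fix, an empty batch reached the batch allocator
        // and crashed; with the guard it is rejected here instead.
        return -1;
    }
    // ... batch allocation and actual decoding would happen here ...
    return 0;
}
```

Usage: `decode_internal(llama_batch{})` returns `-1` instead of crashing, while a batch with at least one token proceeds normally.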