llama.cpp
38b16dfc - metal : bug-fix when enable ggml-alloc (#2757)

metal : bug-fix when enable ggml-alloc (#2757)

* metal: better memory alloc w/ concurrency dispatch
  ggml-alloc should only free tensors at memory barriers.
* ggml-alloc: avoid returning silently
  In certain cases, the allocate_node() function may return without performing any memory allocation.
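The sketch below is a minimal, hypothetical illustration of the two ideas in the commit message, not the actual ggml-alloc implementation: with concurrent command dispatch, a tensor's buffer should only be reclaimed at a memory barrier (when all in-flight nodes are known to have finished), and an allocation routine should fail loudly rather than return silently when it cannot allocate. All names (graph_allocator, schedule_free, memory_barrier, the simplified tensor struct) are invented for this example.

```c
/* Hypothetical sketch (not the actual ggml-alloc code): a graph allocator
 * that defers freeing tensor buffers until a memory barrier, and fails
 * loudly instead of returning silently when allocation is impossible. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_PENDING 64

struct tensor {
    const char *name;
    size_t      size;
    void       *data;   /* buffer owned by the allocator */
};

struct graph_allocator {
    struct tensor *pending_free[MAX_PENDING]; /* released only at barriers */
    int            n_pending;
};

/* Mark a tensor as no longer needed. With concurrent dispatch, other nodes
 * in the same concurrency block may still be reading it, so the buffer is
 * queued and released only at the next memory barrier. */
static void schedule_free(struct graph_allocator *ga, struct tensor *t) {
    if (ga->n_pending < MAX_PENDING) {
        ga->pending_free[ga->n_pending++] = t;
    }
}

/* A memory barrier: every node before this point has finished, so the
 * queued buffers can now be released (or reused) safely. */
static void memory_barrier(struct graph_allocator *ga) {
    for (int i = 0; i < ga->n_pending; i++) {
        free(ga->pending_free[i]->data);
        ga->pending_free[i]->data = NULL;
    }
    ga->n_pending = 0;
}

/* Allocate the output buffer of a node. Instead of silently returning when
 * the allocation cannot be performed, report the problem and abort. */
static void allocate_node(struct graph_allocator *ga, struct tensor *t) {
    (void) ga;
    if (t->data != NULL) {
        return; /* already allocated (e.g. an in-place view) */
    }
    t->data = malloc(t->size);
    if (t->data == NULL) {
        fprintf(stderr, "allocate_node: failed to allocate %zu bytes for %s\n",
                t->size, t->name);
        abort();
    }
}

int main(void) {
    struct graph_allocator ga = {0};

    struct tensor a = { "a", 1024, NULL };
    struct tensor b = { "b", 2048, NULL };

    allocate_node(&ga, &a);
    allocate_node(&ga, &b);

    /* a is no longer needed, but concurrently dispatched nodes may still
     * read it, so its buffer is only reclaimed at the next barrier. */
    schedule_free(&ga, &a);
    memory_barrier(&ga);

    free(b.data);
    return 0;
}
```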
Files changed:
  • ggml-alloc.c
  • llama.cpp