llama.cpp
fix: use `vm_allocate` to allocate CPU backend buffer on macOS
#9875
Merged

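The title summarizes the change: on macOS, the CPU backend buffer is obtained through the Mach `vm_allocate` call rather than the previous allocator. The sketch below is not the merged patch; it only illustrates, under that assumption, what a `vm_allocate`-based allocation and matching `vm_deallocate` release look like. The `ggml_osx_buffer_alloc`/`ggml_osx_buffer_free` names are hypothetical and exist only for this example.

```c
// Minimal sketch of allocating a CPU backend buffer with vm_allocate on macOS.
// Not the actual llama.cpp implementation; helper names are made up.
#include <mach/mach.h>
#include <stddef.h>
#include <stdio.h>

// Allocate `size` bytes of page-aligned, zero-filled memory via vm_allocate.
static void * ggml_osx_buffer_alloc(size_t size) {
    vm_address_t addr = 0;
    kern_return_t kr = vm_allocate(mach_task_self(), &addr, size, VM_FLAGS_ANYWHERE);
    if (kr != KERN_SUCCESS) {
        return NULL;
    }
    return (void *) addr;
}

// Release memory previously obtained from ggml_osx_buffer_alloc.
// vm_deallocate needs the original size, so the caller must track it.
static void ggml_osx_buffer_free(void * ptr, size_t size) {
    if (ptr != NULL) {
        vm_deallocate(mach_task_self(), (vm_address_t) ptr, size);
    }
}

int main(void) {
    const size_t size = 64u * 1024u * 1024u; // e.g. a 64 MiB backend buffer
    void * buf = ggml_osx_buffer_alloc(size);
    if (buf == NULL) {
        fprintf(stderr, "vm_allocate failed\n");
        return 1;
    }
    printf("allocated %zu bytes at %p\n", size, buf);
    ggml_osx_buffer_free(buf, size);
    return 0;
}
```

As documented by Apple, `vm_allocate` returns page-aligned, zero-filled memory whose pages are only backed when first touched, which is a natural fit for large backend buffers; whether that was the motivation here is not stated in this excerpt.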