llama.cpp
3a077146
- llama : allow using mmap without PrefetchVirtualMemory, apply GGML_WIN_VER to llama.cpp sources (#14013)
Commit
190 days ago
References
#14013 - llama : allow using mmap without PrefetchVirtualMemory
Author
slaren
Parents
d01d112a