llama.cpp
Commit 652c8496
Committed 2 years ago
ggml : add is_ram_shared to ggml_backend

Metal can share RAM with the CPU and can use mmap without a temporary buffer.
Author: ggerganov
Parents: 90503f15
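For context, a flag like this tells the model loader whether a backend operates on the same physical RAM as the CPU (as Metal does with Apple's unified memory), so mmap'ed weights can be used in place instead of being staged through a temporary buffer. The following is a minimal C sketch of that idea; the struct, the helper names, and the mmap stand-in are hypothetical and do not reflect the actual ggml_backend interface.

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

// Hypothetical backend descriptor: is_ram_shared marks backends that read
// host RAM directly (e.g. Metal on unified-memory hardware).
struct example_backend {
    const char *name;
    bool        is_ram_shared;
};

// Return tensor data for a backend. `mmap_ptr` stands in for a pointer into
// a read-only mmap of the model file.
static void *get_tensor_data(const struct example_backend *backend,
                             const void *mmap_ptr, size_t size) {
    if (backend->is_ram_shared) {
        // Backend shares host RAM: use the mmap'ed region in place,
        // no temporary buffer and no copy.
        return (void *) mmap_ptr;
    }
    // Otherwise stage the data through a temporary host buffer before it
    // is uploaded to dedicated device memory.
    void *tmp = malloc(size);
    if (tmp != NULL) {
        memcpy(tmp, mmap_ptr, size);
    }
    return tmp;
}

int main(void) {
    static const char model_bytes[16] = "fake model data";  // stand-in for an mmap'ed file

    struct example_backend metal = { "Metal", true  };  // unified memory: share RAM
    struct example_backend cuda  = { "CUDA",  false };  // dedicated VRAM: stage a copy

    void *shared = get_tensor_data(&metal, model_bytes, sizeof model_bytes);  // aliases model_bytes
    void *staged = get_tensor_data(&cuda,  model_bytes, sizeof model_bytes);  // heap copy

    // Only the staged copy owns memory that must be freed.
    free(staged);
    (void) shared;
    return 0;
}
```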