llama.cpp
858f6b73 - Add an option to build without CUDA VMM (#7067)

Committed 1 year ago
Add an option to build ggml cuda without CUDA VMM.

Resolves https://github.com/ggerganov/llama.cpp/issues/6889

See also: https://forums.developer.nvidia.com/t/potential-nvshmem-allocated-memory-performance-issue/275416/4