llama.cpp
afbb4c13 - ggml-cuda: Adding support for unified memory (#8035)

Committed 1 year ago
ggml-cuda: Adding support for unified memory (#8035)

* Adding support for unified memory
* adding again the documentation about unified memory
* refactoring: Moved the unified memory code in the correct location.
* Fixed compilation error when using hipblas
* cleaning up the documentation
* Updating the documentation

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>

* adding one more case where the PR should not be enabled

---------

Co-authored-by: matteo serva <matteo.serva@gmail.com>
Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
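A minimal sketch of the approach this commit takes: when the user opts in via an environment variable, device buffers are allocated with `cudaMallocManaged` (CUDA unified memory) instead of `cudaMalloc`, so allocations larger than available VRAM can spill to host memory through on-demand paging. The function name and exact control flow below are illustrative, not the verbatim llama.cpp code; `GGML_CUDA_ENABLE_UNIFIED_MEMORY` is the opt-in variable documented by this PR. Building this requires the CUDA toolkit (`nvcc`).

```cpp
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative allocator in the spirit of this commit: fall back to
// unified (managed) memory only when the user explicitly opts in,
// since managed allocations can be slower on discrete GPUs.
static cudaError_t device_malloc_sketch(void ** ptr, size_t size) {
    if (std::getenv("GGML_CUDA_ENABLE_UNIFIED_MEMORY") != nullptr) {
        // Managed memory is addressable from both host and device;
        // pages migrate on demand, allowing oversubscription of VRAM.
        return cudaMallocManaged(ptr, size);
    }
    // Default path: plain device allocation, fails if VRAM is exhausted.
    return cudaMalloc(ptr, size);
}
```

The commit's bullet "adding one more case where the PR should not be enabled" reflects that this path is gated off on backends where it does not apply (e.g. the hipblas/HIP build, which has its own UMA handling).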