llama.cpp
cuBLAS: fall back to pageable memory if pinned alloc fails
#1233
Merged


slaren merged 2 commits into ggml-org:master from slaren:pinned-fallback
Commits:
08e539d5 cuBLAS: fall back to pageable memory if pinned alloc fails
476f46f7 cuBLAS: do not use pinned memory if env variable GGML_CUDA_NO_PINNED …
ggerganov approved these changes on 2023-05-01
slaren merged b925f1f1 into master 2 years ago
slaren deleted the pinned-fallback branch 2 years ago
