llama.cpp
Fixed WSL cuda's OOM error
#1594
Merged


slaren merged 3 commits into ggml-org:master from JoelSeniorLiang:master
In the function , add the CUDA error bypass.
0fc61701
Merge branch 'master' of github.com:liangmanlai/llama.cpp
e09c67d1
ggerganov requested a review from ggerganov 2 years ago
JohannesGaessler commented on 2023-05-26
howard0su commented on 2023-05-27
remove excessive codes and prints
0d308e2e
slaren approved these changes on 2023-06-11
slaren merged 12b063f0 into master 2 years ago
