llama.cpp
CUDA: fix logic for clearing padding with -ngl 0
#13320
Merged

Author: JohannesGaessler
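The PR title is the only technical description here. As background, the minimal sketch below illustrates the general pattern the title refers to: the CUDA backend pads quantized tensor allocations so kernels can read fixed-size blocks, and the padding region must be zeroed after uploading the real data, or block-wise kernels would read uninitialized memory past the logical end of the tensor. All names in the sketch (`MATRIX_ROW_PADDING`, `padded_size`, `upload_tensor`) are hypothetical and not taken from this PR's diff; only the CUDA runtime calls (`cudaMemcpyAsync`, `cudaMemsetAsync`) are real API.

```cpp
// Illustrative sketch only, not the ggml-cuda code touched by this PR.
#include <cuda_runtime.h>
#include <cstddef>

// Hypothetical padding granularity: kernels read rows in fixed-size blocks,
// so each allocation is rounded up to a multiple of this value.
constexpr size_t MATRIX_ROW_PADDING = 512;

static size_t padded_size(size_t nbytes) {
    // Round the real tensor size up to the padded allocation size.
    return (nbytes + MATRIX_ROW_PADDING - 1) / MATRIX_ROW_PADDING * MATRIX_ROW_PADDING;
}

// Copy `nbytes` of tensor data to the device and clear the trailing padding.
static cudaError_t upload_tensor(void * dst_dev, const void * src_host,
                                 size_t nbytes, cudaStream_t stream) {
    cudaError_t err = cudaMemcpyAsync(dst_dev, src_host, nbytes,
                                      cudaMemcpyHostToDevice, stream);
    if (err != cudaSuccess) {
        return err;
    }
    const size_t alloc_size = padded_size(nbytes);
    if (alloc_size > nbytes) {
        // Zero only the padding region [nbytes, alloc_size).
        err = cudaMemsetAsync((char *) dst_dev + nbytes, 0,
                              alloc_size - nbytes, stream);
    }
    return err;
}
```

With -ngl 0 no model layers are offloaded to the GPU, and the PR title suggests the logic deciding when such padding gets cleared did not behave correctly in that configuration; the sketch only shows why the padding needs to be zeroed at all.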
github-actions added the Nvidia GPU and ggml labels
slaren commented on 2025-05-05
JohannesGaessler force-pushed to commit ac78a42a (CUDA: fix logic for clearing padding with -ngl 0) 258 days ago
slaren approved these changes on 2025-05-05
JohannesGaessler merged commit 90703650 into master 258 days ago
Reviewers: slaren, CISC
Assignees: none
Labels: Nvidia GPU, ggml
Milestone: none