llama.cpp
(Bugfix, ggml-cuda) Pool alloc count fix + small size computation type adjustment
#18559
Merged

Commits by pl752:

cca7749a  CUDA: Fixed obj byte size instead of obj count being passed to pool a…
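The first commit addresses a unit mismatch: a pool allocation helper that expects an element count was being handed a byte size. Below is a minimal sketch of that pattern, using a hypothetical `pool_alloc<T>` helper for illustration; it is not the actual ggml-cuda pool interface.

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical element-count based pool helper, for illustration only.
// Like a typed pool allocator, it multiplies by sizeof(T) internally.
template <typename T>
T * pool_alloc(std::size_t n_elements) {
    return static_cast<T *>(std::malloc(n_elements * sizeof(T)));
}

void example(std::size_t ne) {
    // Buggy pattern: the caller passes a byte size where the helper
    // expects an element count, so the buffer ends up sizeof(float)
    // times larger than intended.
    float * wrong = pool_alloc<float>(ne * sizeof(float));

    // Fixed pattern: pass the element count and let the helper scale it.
    float * right = pool_alloc<float>(ne);

    std::free(wrong);
    std::free(right);
}
```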
55096144  CUDA: Explicitly casted some of the int alloc counts before multiplic…
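The second commit widens integer operands before multiplying allocation counts, so the intermediate product is not computed in 32-bit `int`. A sketch of the general idea follows; the names are illustrative and not taken from the ggml-cuda sources.

```cpp
#include <cstddef>

// Illustrative only: computing a buffer size from int dimensions.
std::size_t buffer_bytes(int ne0, int ne1, std::size_t elem_size) {
    // Buggy pattern: ne0 * ne1 is evaluated in 32-bit int and can
    // overflow before the result is widened:
    //   std::size_t bad = ne0 * ne1 * elem_size;

    // Fixed pattern: cast one operand up front so the whole product
    // is computed in the wider type.
    return static_cast<std::size_t>(ne0) * ne1 * elem_size;
}
```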
pl752 requested a review from JohannesGaessler 15 days ago
github-actions added labels: Nvidia GPU, ggml
JohannesGaessler approved these changes on 2026-01-03
JohannesGaessler merged 9dba9f53 into master 15 days ago
