llama.cpp
(Bugfix, ggml-cuda) Pool alloc count fix + small size computation type adjustment
#18559
Merged
(Bugfix, ggml-cuda) Pool alloc count fix + small size computation type adjustment
#18559
JohannesGaessler merged 2 commits into ggml-org:master from pl752:pool_alloc_count_fix
cca7749a CUDA: Fixed obj byte size instead of obj count being passed to pool a…
55096144 CUDA: Explicitly casted some of the int alloc counts before multiplic…
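The two commit subjects describe related fixes in the CUDA pool allocation paths: an allocation helper was being handed an object byte size where it expects an object count, and some 32-bit int counts were multiplied before being widened, which can overflow. The sketch below is a minimal illustration of both patterns in plain C++; the `pool_alloc_elements` helper, the dimensions `ne0`/`ne1`, and the buffer sizes are hypothetical and are not taken from the actual diff.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Stand-in for a device memory pool helper that takes an ELEMENT COUNT and
// multiplies by sizeof(T) itself; callers must not pre-multiply by the size.
template <typename T>
T * pool_alloc_elements(std::vector<char> & pool, size_t n_elements) {
    pool.resize(n_elements * sizeof(T)); // count -> bytes happens here
    return reinterpret_cast<T *>(pool.data());
}

int main() {
    std::vector<char> pool;

    // Hypothetical tensor dimensions, large enough that their product
    // does not fit in a 32-bit int.
    const int ne0 = 100000;
    const int ne1 = 50000;

    // Bug pattern 2: multiplying two ints overflows before the result is
    // widened to size_t (100000 * 50000 > INT_MAX), giving a bogus count.
    // size_t n_bad = ne0 * ne1;

    // Fix: cast to a wide type before the multiplication.
    const size_t n_total = (size_t) ne0 * (size_t) ne1;
    std::printf("element count computed without overflow: %zu\n", n_total);

    // Bug pattern 1: passing a byte size where the helper expects an
    // element count over-allocates by a factor of sizeof(float).
    // float * bad = pool_alloc_elements<float>(pool, 256 * sizeof(float));

    // Fix: pass the element count; the helper scales by sizeof(T) itself.
    float * buf = pool_alloc_elements<float>(pool, 256);
    std::printf("allocated 256 floats at %p\n", (void *) buf);
    return 0;
}
```

The distinction matters because a count-taking allocator multiplies by the element size internally, so a pre-multiplied byte size silently over-allocates rather than failing, and an int multiplication that overflows before being widened can silently under-allocate.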
pl752 requested a review from JohannesGaessler 15 days ago
github-actions added the Nvidia GPU label
github-actions added the ggml label
JohannesGaessler approved these changes on 2026-01-03
JohannesGaessler merged 9dba9f53 into master 15 days ago
Reviewers: JohannesGaessler
Assignees: No one assigned
Labels: Nvidia GPU, ggml
Milestone: No milestone