llama.cpp
ggml: avoid creating CUDA context during device init
#20595
Merged


ServeurpersoCom authored ggml: avoid creating CUDA context during device init (commit 15f4c938)
ServeurpersoCom requested a review 21 days ago
am17an approved these changes on 2026-03-15
JohannesGaessler approved these changes on 2026-03-15
am17an merged ceef6b52 into master 21 days ago
github-actions added labels: Nvidia GPU, ggml

Assignees: No one assigned