llama.cpp
ggml-cuda: use passed ops instead of hardcoded ops
#16712
Merged

am17an ggml-cuda: use passed ops instead of hardcoded ops (94ba8b3c)
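
As the title and the cuda-refactor-can-fuse branch name suggest, the change makes the CUDA backend's fusion-eligibility check validate the operator sequence passed in by the caller rather than a hardcoded op pattern. A minimal sketch of that idea, using illustrative types and names rather than the actual ggml-cuda API:

```cpp
#include <initializer_list>

// Illustrative op and graph types (hypothetical, not the real ggml structs).
enum example_op { OP_NONE, OP_RMS_NORM, OP_MUL, OP_ADD };

struct example_node {
    example_op op;
};

struct example_graph {
    const example_node * nodes;
    int n_nodes;
};

// Fusion-eligibility check that walks the op sequence passed by the caller
// instead of comparing against a hardcoded pattern, so a new fusible pattern
// only needs to be described at the call site.
static bool can_fuse(const example_graph & g, int start, std::initializer_list<example_op> ops) {
    if (start < 0 || start + (int) ops.size() > g.n_nodes) {
        return false;
    }
    int i = start;
    for (const example_op op : ops) {
        if (g.nodes[i++].op != op) {
            return false;
        }
    }
    return true;
}

// Example call site: check whether nodes [idx, idx+1] form an RMS_NORM -> MUL pair.
// bool fusible = can_fuse(graph, idx, { OP_RMS_NORM, OP_MUL });
```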
am17an requested a review from slaren 57 days ago
github-actions added the Nvidia GPU and ggml labels
am17an requested a review from JohannesGaessler 57 days ago
JohannesGaessler approved these changes on 2025-10-23
am17an merged commit 061f0eff into master 56 days ago
am17an deleted the cuda-refactor-can-fuse branch 55 days ago
