llama.cpp
CUDA: skip fusion for repeating adds in bias #17080 (Merged)


Commit 083a7f0c by am17an: CUDA: skip fusion for repeating adds in bias
am17an requested a review from slaren 1 day ago
github-actions added labels: testing, Nvidia GPU, ggml
am17an requested a review from JohannesGaessler 1 day ago
JohannesGaessler approved these changes on 2025-11-08
am17an merged commit c1b18768 into master 1 day ago
am17an deleted the cuda-fix-bias-fusion branch 1 day ago
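As the title states, the change makes the CUDA backend skip fusing a bias add when the add is "repeating", i.e. the bias tensor is broadcast over the output rather than matching it element for element. A minimal sketch of that guard is below; the struct and function names are hypothetical illustrations, not llama.cpp internals (ggml stores shapes in an `ne[4]` array, which the sketch mirrors):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical 4-D tensor shape, mirroring ggml's ne[] convention.
struct shape4 {
    int64_t ne[4];
};

static bool same_shape(const shape4 & a, const shape4 & b) {
    for (int i = 0; i < 4; ++i) {
        if (a.ne[i] != b.ne[i]) {
            return false;
        }
    }
    return true;
}

// Sketch of the fusion guard: fuse the bias add into the preceding
// kernel only when the bias has exactly the output's shape. A
// repeating (broadcast) add, e.g. a [32,1,1,1] bias added to a
// [32,8,1,1] output, must fall back to the unfused add path.
static bool can_fuse_bias_add(const shape4 & out, const shape4 & bias) {
    return same_shape(out, bias);
}
```

The point of the check is correctness, not speed: a fused kernel that assumes one bias element per output element would read out of bounds (or reuse wrong values) when the bias actually repeats across rows.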

Assignees: none
Milestone: none