llama.cpp
CUDA: use mma FA kernel for gqa > 4 on RTX 4000
#15035
Merged
JohannesGaessler merged 1 commit into ggml-org:master from JohannesGaessler:cuda-fa-kernel-choice
Commit 069d410b: CUDA: use mma FA kernel for gqa > 4 on RTX 4000
github-actions added labels: Nvidia GPU, ggml
ggerganov approved these changes on 2025-08-02
JohannesGaessler merged 03d46982 into master 217 days ago
Reviewers: ggerganov
Assignees: no one assigned
Labels: Nvidia GPU, ggml
Milestone: no milestone