llama.cpp
CUDA: mul_mat_q=true as default for llama_context_params
#2912
Merged
JohannesGaessler merged 1 commit into ggml-org:master from JohannesGaessler:cuda-mmq-default-2
Commit be1ddb14: CUDA: mul_mat_q=true llama_context_params default
slaren approved these changes on 2023-08-30
JohannesGaessler merged 8afe2280 into master 2 years ago
JohannesGaessler deleted the cuda-mmq-default-2 branch 1 year ago
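For context, here is a minimal sketch of what this change amounts to in the C API: the `mul_mat_q` field of `llama_context_params` now defaults to `true`, enabling the quantized matrix-multiplication (MMQ) CUDA kernels unless the caller opts out. The struct below is a trimmed, illustrative stand-in based on the llama.h API as it existed around this PR, not the exact upstream source.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative stand-in for the relevant part of llama_context_params;
 * field names follow the llama.h API of late 2023, other fields omitted. */
struct llama_context_params {
    int  n_ctx;     /* text context size */
    int  n_batch;   /* prompt-processing batch size */
    bool mul_mat_q; /* use the quantized matrix-multiplication (MMQ) CUDA kernels */
};

/* Sketch of the default-params constructor after this PR:
 * mul_mat_q now starts out true instead of false. */
struct llama_context_params llama_context_default_params_sketch(void) {
    struct llama_context_params params = {
        .n_ctx     = 512,
        .n_batch   = 512,
        .mul_mat_q = true, /* flipped to true by this PR */
    };
    return params;
}

int main(void) {
    struct llama_context_params p = llama_context_default_params_sketch();
    printf("mul_mat_q default: %s\n", p.mul_mat_q ? "true" : "false");
    return 0;
}
```

Callers that prefer the previous cuBLAS-based path can still opt out by setting `params.mul_mat_q = false` on the params struct before creating the context (e.g. before `llama_new_context_with_model` in the API of that era).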
Reviewers: slaren
Assignees: No one assigned
Labels: None yet
Milestone: No milestone