llama.cpp
CUDA: MMQ support for iq4_nl, iq4_xs
#8278
Merged
JohannesGaessler merged 1 commit into ggml-org:master from JohannesGaessler:cuda-iq-mmq-2
382e3410 CUDA: MMQ support for iq4_nl, iq4_xs
github-actions added the Nvidia GPU label
github-actions added the python label
mofosyne added the Review Complexity : High label
slaren approved these changes on 2024-07-04
JohannesGaessler merged 8e558309 into master 1 year ago
Reviewers: slaren
Assignees: No one assigned
Labels: Nvidia GPU, python, Review Complexity : High
Milestone: No milestone