llama.cpp
CUDA: int8 tensor cores for MMQ (q4_K, q5_K, q6_K)
#7860
Merged
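
For context on the title: "int8 tensor cores" are the GPU matrix-multiply units that take signed 8-bit integer inputs and accumulate into 32-bit integers, which matches the integer dot products that llama.cpp's MMQ (quantized matrix multiplication) kernels perform on quantized weights and q8_1-quantized activations. The sketch below is not code from this PR; it is a minimal, self-contained illustration of driving an int8 tensor-core multiply through CUDA's WMMA API (one warp, 16x16x16 tile, signed char inputs, int accumulator), assuming a GPU with int8 tensor core support (sm_72 or newer).

```cuda
#include <cstdio>
#include <cuda_runtime.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes C (16x16, int32) = A (16x16, int8) * B (16x16, int8)
// on the tensor cores. Compile for sm_72+, e.g.: nvcc -arch=sm_75 int8_wmma.cu
__global__ void int8_wmma_16x16x16(const signed char *A, const signed char *B,
                                   int *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, signed char, wmma::row_major> a;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, signed char, wmma::col_major> b;
    wmma::fragment<wmma::accumulator, 16, 16, 16, int> c;

    wmma::fill_fragment(c, 0);
    wmma::load_matrix_sync(a, A, 16);   // leading dimension = 16
    wmma::load_matrix_sync(b, B, 16);
    wmma::mma_sync(c, a, b, c);         // c += a * b: int8 in, int32 accumulate
    wmma::store_matrix_sync(C, c, 16, wmma::mem_row_major);
}

int main() {
    signed char *A, *B;
    int *C;
    cudaMallocManaged(&A, 256);
    cudaMallocManaged(&B, 256);
    cudaMallocManaged(&C, 256 * sizeof(int));
    for (int i = 0; i < 256; ++i) { A[i] = 1; B[i] = 2; }
    int8_wmma_16x16x16<<<1, 32>>>(A, B, C);  // a single warp
    cudaDeviceSynchronize();
    printf("C[0] = %d (expected %d)\n", C[0], 16 * 1 * 2);
    return 0;
}
```

The 32-bit integer accumulator is what makes this hardware path attractive for the k-quants named in the title (q4_K, q5_K, q6_K): their sub-block integer dot products can be accumulated exactly, with the per-block scales applied afterward in floating point.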