llama.cpp
Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range
#5721 · Merged
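
For context, IQ2_S and IQ2_M are exposed the same way as the existing IQ2_* variants, as file types selectable through llama.cpp's public quantization API. Below is a minimal sketch of requesting one of the new types; the file names are placeholders, and the enum names are assumed to match `llama.h` as of this PR.

```c
#include <stdint.h>
#include <stdio.h>
#include "llama.h"

int main(void) {
    // Start from the library's default quantization parameters.
    struct llama_model_quantize_params params = llama_model_quantize_default_params();

    // Select one of the two new file types introduced by this PR.
    params.ftype = LLAMA_FTYPE_MOSTLY_IQ2_S;   // or LLAMA_FTYPE_MOSTLY_IQ2_M

    // Quantize an f16 GGUF model to the selected 2-bit type (placeholder paths).
    uint32_t rc = llama_model_quantize("model-f16.gguf", "model-iq2_s.gguf", &params);
    if (rc != 0) {
        fprintf(stderr, "quantization failed (code %u)\n", rc);
        return 1;
    }
    return 0;
}
```

The same selection is available from the `quantize` command-line tool by passing the type name (e.g. `IQ2_S` or `IQ2_M`) as the quantization argument.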