llama.cpp
Adding IQ2_S and IQ2_M to complete coverage of the 2-3 bit quantization range
#5721
Merged
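
A minimal sketch of how the new quantization types might be selected through the llama.cpp C API once this change is in place. It assumes a build of llama.cpp that includes this PR, and that the new types follow the existing LLAMA_FTYPE_MOSTLY_* naming pattern (LLAMA_FTYPE_MOSTLY_IQ2_S / LLAMA_FTYPE_MOSTLY_IQ2_M); this is an illustration, not the PR's own test code.

```cpp
// Sketch: quantize a GGUF model to the new IQ2_S type via the llama.cpp C API.
// Assumes llama.h from a build containing this PR and the enum names above.
#include "llama.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 3) {
        fprintf(stderr, "usage: %s <input.gguf> <output.gguf>\n", argv[0]);
        return 1;
    }

    // start from the library's default quantization parameters
    llama_model_quantize_params params = llama_model_quantize_default_params();

    // select one of the new 2-bit variants added by this PR
    // (use LLAMA_FTYPE_MOSTLY_IQ2_M for the slightly larger/higher-quality one)
    params.ftype = LLAMA_FTYPE_MOSTLY_IQ2_S;

    // quantize input -> output; returns 0 on success
    if (llama_model_quantize(argv[1], argv[2], &params) != 0) {
        fprintf(stderr, "quantization failed\n");
        return 1;
    }

    return 0;
}
```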
