llama.cpp
Support requantizing models instead of only allowing quantization from 16/32bit
#1691
Merged
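The title describes the change: the quantize path should accept models that are already quantized, not just 16/32-bit inputs. Below is a minimal sketch of how requantization might be driven through the public llama.cpp API. The example program and its argument handling are illustrative, not taken from the PR; it assumes the `llama_model_quantize_params` struct exposes an `allow_requantize` flag, which is the capability this PR describes.

```cpp
// Minimal sketch (illustrative, not from the PR): requantize an
// already-quantized model to a different quantization type via the
// public llama.cpp API. Assumes llama_model_quantize_params provides
// an `allow_requantize` flag.
#include "llama.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 3) {
        fprintf(stderr, "usage: %s <input-model> <output-model>\n", argv[0]);
        return 1;
    }

    llama_model_quantize_params params = llama_model_quantize_default_params();
    params.ftype            = LLAMA_FTYPE_MOSTLY_Q4_0; // target quantization type
    params.allow_requantize = true;                    // input may already be quantized

    // Depending on the llama.cpp version, llama_backend_init()/llama_backend_free()
    // may also need to wrap this call. Returns 0 on success.
    if (llama_model_quantize(argv[1], argv[2], &params) != 0) {
        fprintf(stderr, "failed to requantize %s\n", argv[1]);
        return 1;
    }
    return 0;
}
```

Note that requantizing from an already-quantized model compounds the quantization error, so quality is generally lower than quantizing directly from a 16/32-bit source; this option is a convenience for when the original weights are not available.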