GGUF compatible quantization (2, 3, 4 bit / any bit) #285
Commits:
- GGUF compatible quantization (2, 3, 4 bit) (a0cb9e57)
- Update example (b02263f6)

casper-hansen changed the title from "GGUF compatible quantization (2, 3, 4 bit)" to "GGUF compatible quantization (2, 3, 4 bit / any bit)" 2 years ago

- Change default model to Mistral (8bbf7432)
- Merge branch 'main' into gguf (0b40094b)
- Add pack() utility function; rename gguf_compatible to export_compatible (c7eae1b2)
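The last commit names a pack() utility for the any-bit path. Its actual implementation lives in the repository and is not shown here; as an illustrative stand-in only, a generic n-bit packer (values staged low-bits-first into bytes) might look like this minimal sketch:

```python
def pack(values, bits):
    """Pack unsigned n-bit integers into bytes, low bits first.

    Hypothetical illustration of any-bit packing; NOT the repository's
    actual pack() implementation.
    """
    assert 1 <= bits <= 8, "sketch only handles up to 8-bit values"
    buf, acc, nbits = bytearray(), 0, 0
    for v in values:
        assert 0 <= v < (1 << bits), "value out of range for bit width"
        acc |= v << nbits      # stage value above bits already buffered
        nbits += bits
        while nbits >= 8:      # flush every full byte
            buf.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:                  # flush any trailing partial byte
        buf.append(acc & 0xFF)
    return bytes(buf)
```

For example, packing the 4-bit values [1, 2, 3, 4] yields two bytes, 0x21 and 0x43, since each byte holds two nibbles with the first value in the low nibble.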