llama.cpp
Commit: 7a74dee6
llama : temporary disable Q6_K output quantization (#1711)
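The commit message says Q6_K quantization of the output tensor is temporarily disabled. The diff itself is not shown here, so the following is only a minimal hypothetical sketch of the general idea: instead of special-casing the output projection to a higher-precision type (Q6_K), it falls back to the model's default quantization type. All names (`choose_quant_type`, `Q6_K_DISABLED`, the `"output.weight"` tensor name) are assumptions for illustration, not llama.cpp's actual code.

```python
# Hypothetical sketch -- NOT llama.cpp's actual implementation.
# Illustrates temporarily disabling a Q6_K special case for the
# output tensor and using the default quantization type instead.

Q6_K_DISABLED = True  # temporary switch, per the commit message


def choose_quant_type(tensor_name: str, default_type: str) -> str:
    """Pick a quantization type for a tensor.

    Normally the output projection might get the higher-precision
    Q6_K; while the temporary switch is on, it gets the default
    type like every other tensor.
    """
    if tensor_name == "output.weight" and not Q6_K_DISABLED:
        return "Q6_K"
    return default_type


print(choose_quant_type("output.weight", "Q4_0"))  # prints Q4_0 while disabled
```

Re-enabling the special case would be a one-line change (flip `Q6_K_DISABLED` to `False`), which is presumably why the commit message calls the change temporary.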
Committed: 2 years ago
Author: ggerganov
Parent: 590250f7