llama.cpp
f9c585f0
- Generalize quantize_fns for simpler FP16 handling
Commit
2 years ago
Generalize quantize_fns for simpler FP16 handling
References
#1237 - Generalize `quantize_fns` for simpler FP16 handling
Author
sw
Committer
sw
Parents
46088f72