transformers
1c122a46
Support dequantizing GGUF FP16 format (#31783)
Commit · 1 year ago
Support dequantizing GGUF FP16 format (#31783)

* support gguf fp16
* support gguf bf16 with pytorch
* add gguf f16 test
* remove bf16
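The commit adds support for "dequantizing" GGUF F16 tensors. For this format there is no real quantization scheme to undo: F16 tensors are stored as raw IEEE half-precision values, so loading them only requires reinterpreting the bytes and upcasting for compute. A minimal sketch of that idea (the function name and byte-level interface are illustrative assumptions, not the actual transformers/gguf API):

```python
import numpy as np

def dequantize_gguf_f16(raw: bytes, shape: tuple) -> np.ndarray:
    # Hypothetical sketch: a GGUF F16 tensor is a flat buffer of
    # little-endian float16 values, so "dequantizing" is just a
    # reinterpretation of the bytes plus an upcast to float32.
    return np.frombuffer(raw, dtype=np.float16).reshape(shape).astype(np.float32)

# Round-trip a small tensor through its on-disk byte layout.
original = np.array([[0.5, -1.25], [3.0, 0.0]], dtype=np.float16)
restored = dequantize_gguf_f16(original.tobytes(), original.shape)
```

Since float16 values are exactly representable in float32, the round-trip above is lossless, which is why F16 "dequantization" can be a plain dtype cast rather than a scale-and-offset reconstruction as with true quantized GGUF block formats.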
References
#31783 - Support dequantizing GGUF FP16 format
Author
PenutChen
Parents
af0e4b7b