transformers
1c122a46 - Support dequantizing GGUF FP16 format (#31783)

Support dequantizing GGUF FP16 format (#31783)

* support gguf fp16
* support gguf bf16 with pytorch
* add gguf f16 test
* remove bf16
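A minimal sketch of what "dequantizing" the GGUF F16 format amounts to: unlike the block-quantized GGUF types, an F16 tensor stores raw IEEE-754 half-precision values, so loading it reduces to reinterpreting the byte buffer as float16 and upcasting. The function name `dequantize_f16` is illustrative, not the actual transformers API.

```python
import numpy as np

def dequantize_f16(raw: bytes, n_elements: int) -> np.ndarray:
    """Illustrative: a GGUF F16 tensor is a flat buffer of IEEE-754
    half-precision floats, so 'dequantization' is just a dtype
    reinterpretation followed by an upcast to float32."""
    data = np.frombuffer(raw, dtype=np.float16, count=n_elements)
    return data.astype(np.float32)

# Round-trip two half-precision values through a raw byte buffer.
packed = np.array([1.5, -0.25], dtype=np.float16).tobytes()
print(dequantize_f16(packed, 2))  # [ 1.5  -0.25]
```

Both 1.5 and -0.25 are exactly representable in float16, so the upcast is lossless here.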