llama.cpp
2a24c8ca - Add Nemotron/Minitron GGUF Conversion & Inference Support (#8922)

Commit
1 year ago
Add Nemotron/Minitron GGUF Conversion & Inference Support (#8922)

* Add nemotron GGUF conversion & inference support
* Fix formatting issues
* Remove unnecessary write_tensors()
* Update convert_hf_to_gguf.py

Co-authored-by: compilade <git@compilade.net>

* Update src/llama.cpp

Co-authored-by: compilade <git@compilade.net>

* Address comments by @compilade
* Replace ggml_mul_mat() -> llm_build_lora_mm()
* Remove mutable variable
* Use for bias tensors
* Cover corner case for rope_scaling not in config.json

---------

Co-authored-by: compilade <git@compilade.net>
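The last bullet covers models whose `config.json` omits the `rope_scaling` block entirely. A minimal sketch of that corner case in Python (the helper name `read_rope_scaling` and the sample config are illustrative, not llama.cpp's actual converter code):

```python
import json

def read_rope_scaling(config: dict):
    """Return the rope_scaling block from a parsed config.json,
    or None when the key is absent (fall back to default RoPE)."""
    # dict.get() avoids a KeyError when the key is missing,
    # which is the corner case the commit guards against.
    return config.get("rope_scaling")

# A config.json without a rope_scaling entry (hypothetical sample).
config = json.loads('{"hidden_size": 4096}')
assert read_rope_scaling(config) is None

# When present, the block is returned unchanged.
config = json.loads('{"rope_scaling": {"type": "linear", "factor": 2.0}}')
assert read_rope_scaling(config) == {"type": "linear", "factor": 2.0}
```

The design point is simply to treat a missing `rope_scaling` key as "no scaling" rather than raising during conversion.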