llama.cpp
898aeca9 - llama : implement YaRN RoPE scaling (#2268)
Commit (1 year ago)

llama : implement YaRN RoPE scaling (#2268)

Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
Co-authored-by: Jeffrey Quesnelle <jquesnelle@gmail.com>
References: #2268 - llama: implement YaRN RoPE scaling
Author: cebtenzzre
Parents: c43c2da8

Files (15):
- common/common.cpp
- common/common.h
- convert-baichuan-hf-to-gguf.py
- convert.py
- examples/finetune/finetune.cpp
- examples/server/server.cpp
- examples/train-text-from-scratch/train-text-from-scratch.cpp
- ggml-cuda.cu
- ggml-metal.m
- ggml-metal.metal
- ggml.c
- ggml.h
- gguf-py/gguf/gguf.py
- llama.cpp
- llama.h
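For orientation only: the commit title refers to YaRN RoPE scaling, the "NTK-by-parts" context-extension scheme described in the YaRN work that PR #2268 is based on. The sketch below illustrates the general idea in plain C: rotary dimensions that complete many rotations within the original context window are left extrapolated, slow (low-frequency) dimensions are position-interpolated by the scale factor, a linear ramp blends the two regimes, and a small magnitude correction is applied. The function names, parameter defaults (beta_fast, beta_slow), and exact formulas here are illustrative assumptions drawn from the YaRN description, not the API or kernels added by this commit.

/*
 * Minimal sketch of YaRN-style "NTK-by-parts" RoPE scaling (illustrative only;
 * not the implementation in ggml.c / llama.cpp).
 */
#include <math.h>
#include <stdio.h>

#define TWO_PI 6.28318530717958647692f

/* Full rotations that dimension pair i0 completes over the original context. */
static float yarn_rotations(int i0, int n_dims, float freq_base, int n_ctx_orig) {
    float freq = powf(freq_base, -(float)i0 / (float)n_dims);  /* radians per token */
    return (float)n_ctx_orig * freq / TWO_PI;
}

/* 0 = fully interpolate (slow dims), 1 = fully extrapolate (fast dims). */
static float yarn_ramp(float rotations, float beta_slow, float beta_fast) {
    float y = (rotations - beta_slow) / (beta_fast - beta_slow);
    return fminf(1.0f, fmaxf(0.0f, y));
}

/* Effective rotation angle for token position `pos` and dimension pair index i0. */
static float yarn_theta(int pos, int i0, int n_dims, float freq_base, float scale,
                        int n_ctx_orig, float beta_fast, float beta_slow) {
    float theta_extrap = (float)pos * powf(freq_base, -(float)i0 / (float)n_dims);
    float theta_interp = theta_extrap / scale;  /* plain position interpolation */
    float mix = yarn_ramp(yarn_rotations(i0, n_dims, freq_base, n_ctx_orig),
                          beta_slow, beta_fast);
    return theta_interp * (1.0f - mix) + theta_extrap * mix;
}

int main(void) {
    /* Example: 128-dim head, base 10000, extending a 4096-token context 4x. */
    const int   n_dims = 128, n_ctx_orig = 4096, pos = 10000;
    const float freq_base = 10000.0f, scale = 4.0f;
    const float beta_fast = 32.0f, beta_slow = 1.0f;         /* assumed defaults */
    const float mscale = 1.0f + 0.1f * logf(scale);          /* magnitude correction */

    for (int i0 = 0; i0 < n_dims; i0 += 32) {  /* i0 indexes even dims 0, 2, ..., n_dims-2 */
        float theta = yarn_theta(pos, i0, n_dims, freq_base, scale,
                                 n_ctx_orig, beta_fast, beta_slow);
        printf("dim %3d: cos = %+.4f, sin = %+.4f (mscale %.3f)\n",
               i0, cosf(theta) * mscale, sinf(theta) * mscale, mscale);
    }
    return 0;
}

The blend means dimensions that already rotate many times inside the original window keep their resolution for nearby positions, while slow dimensions are compressed to cover the extended context; the mscale term compensates for the change in attention magnitude that interpolation introduces.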