Initial support - model loads, generates random stuff
27467a5c
Remove gpt neox references
80b2e729
Fixes suggested by @mmnga
605e701c
Make stablelm conversion script use .safetensors
1ee5cc30
Remove random junk print
f1dd430f
Fix model conversion script
4fbce390
Use ggml_norm not ggml_rms_norm
a71041a0
Fix rope parameters
76b4495c
Fix formatting in gguf.py
e167ebcb
Fix formatting in llama.cpp
839a1838
Galunid
marked this pull request as ready for review 1 year ago
Merge branch 'master' into stablelm-support
db09c028
batch.seq_id[j] -> batch.seq_id[j][0]
01533762
Fix added_tokens crashes
e3990505
Merge branch 'master' into stablelm-support
cf5eff36
Add tests for stablelm tokenizer
a92fd2d7
Update readme with stablelm support
d9c03323
Add special token handling to convert script
fa2cd7e7
Merge branch 'master' into stablelm-support
27d0c118
Prevent offloading of more than 33 layers
51b3b56c
Make convert script work with pytorch files
a00bb06c
Merge branch 'master' into stablelm-support
8917767f
Update after #3382
c9593764
Merge branch 'master' into stablelm-support
698c9459
LLAMA_BACKEND_OFFLOAD* -> llama_backend_offload*
4713a40c
Merge branch 'master' into stablelm-support
2f415527
Update conversion script to convert-hf-to-gguf.py
6be33567
Use ggml_view_3d
a371a8b6
Cleanup for review
e87d7094
Add vision model support
9e035cda
ggerganov
approved these changes
on 2023-11-11
Duh - add llava in another place
047032d6
Make qrot, krot contiguous
be2ac38a
Merge branch 'master' into stablelm-support
beb17a7d
Fix gguf post merge
853fe042
Green-Sky
approved these changes
on 2023-11-14
Galunid
merged
36eed0c4
into master 1 year ago
Galunid
deleted the stablelm-support branch 1 year ago