llama.cpp
Validate special token ids are in range when loading GGUF model #3635
Merged

Author: KerfuffleV2
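
The change, per the PR title: when special token ids (BOS, EOS, etc.) are read from a GGUF file's metadata, an id that does not fall inside the model's vocabulary is rejected at load time instead of being carried into inference. The sketch below illustrates that guard; it is a minimal, self-contained illustration rather than the actual llama.cpp code, and the `Vocab` and `set_special_token_id` names are hypothetical.

```cpp
// Minimal sketch of range-validating a special token id (hypothetical names,
// not the real llama.cpp implementation).
#include <cstdint>
#include <cstdio>

struct Vocab {
    int32_t n_tokens = 32000; // size of the model's token table
    int32_t bos_id   = -1;    // -1 means "unset"
};

// Accept a candidate special token id only if it indexes into the vocabulary;
// otherwise warn and leave the destination unset instead of failing later.
static void set_special_token_id(const Vocab & vocab, int32_t & dest,
                                 const char * name, int32_t id) {
    if (id < 0 || id >= vocab.n_tokens) {
        fprintf(stderr,
                "warning: special token '%s' id %d is out of range [0, %d), ignoring\n",
                name, id, vocab.n_tokens);
        return;
    }
    dest = id;
}

int main() {
    Vocab vocab;
    set_special_token_id(vocab, vocab.bos_id, "bos", 999999); // rejected with a warning
    set_special_token_id(vocab, vocab.bos_id, "bos", 1);      // accepted
    printf("bos_id = %d\n", vocab.bos_id);                    // -> bos_id = 1
}
```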
KerfuffleV2 marked this pull request as draft 2 years ago
KerfuffleV2 changed the title from "Valid special token ids are in range when loading GGUF model" to "Validate special token ids are in range when loading GGUF model" 2 years ago
KerfuffleV2 marked this pull request as ready for review 2 years ago
cebtenzzre commented on 2023-10-16
ggerganov approved these changes on 2023-10-17
ggerganov commented on 2023-10-17
Commits (all by KerfuffleV2):

d1075f6e  Add validation for special token ids to llama.cpp
14be9d91  Fix BPE newline check, only I could break something so simple
32383bbd  Killll meeeeee
4079668c  Account for GGUF_KEY_KEY only setting when the key exists
22b914e0  Minor code cleanups.
3a007e2c  Fix convert.py error msg when added tokens are out of range
8796025b  Make gguf SpecialVocab vocab size-aware
76b05fc4  Avoid a string copy
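
One subtlety worth noting from commit 4079668c: a metadata lookup that writes its output variable only when the key exists will silently leave stale or uninitialized data behind if the caller never pre-seeds it. Below is a self-contained sketch of that pattern, with a hypothetical `get_key_u32` helper standing in for the real GGUF key API:

```cpp
// Sketch of the "value only set when the key exists" pitfall (toy stand-in,
// not the actual GGUF API): the destination must be pre-initialized to a
// sentinel before the lookup, not assumed valid afterwards.
#include <cstdint>
#include <cstdio>
#include <map>
#include <string>

using Metadata = std::map<std::string, uint32_t>;

// Writes *out and returns true only if the key is present.
static bool get_key_u32(const Metadata & md, const std::string & key, uint32_t * out) {
    auto it = md.find(key);
    if (it == md.end()) return false;
    *out = it->second;
    return true;
}

int main() {
    Metadata md = { { "tokenizer.ggml.eos_token_id", 2 } };

    uint32_t bos_id = UINT32_MAX; // sentinel: untouched when the key is missing
    if (!get_key_u32(md, "tokenizer.ggml.bos_token_id", &bos_id)) {
        printf("bos key absent, sentinel kept: %u\n", (unsigned) bos_id);
    }
}
```

Pre-seeding the sentinel before the lookup keeps "key absent" distinguishable from any real token id.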
KerfuffleV2 force pushed to 76b05fc4 2 years ago
ggerganov merged a5e7dbd6 into master 2 years ago
KerfuffleV2 deleted the fix-handle-bad-special-token-ids branch 2 years ago
