llama.cpp
Fix incorrect prompt tokenization in speculative example
#4025
Merged

ggerganov approved these changes on 2023-11-17
AutonomicPerfectionist: Support special tokens and not adding BOS to prompt in speculative (c2d00417)
AutonomicPerfectionist force-pushed from 5651be51 to c2d00417 1 year ago
KerfuffleV2 requested changes on 2023-11-18
AutonomicPerfectionist: Adapt to new should_add_bos function (e778ce4a)
AutonomicPerfectionist: Ensure tgt and dft have same add_bos setting (9cfc5e21)
AutonomicPerfectionist requested a review from KerfuffleV2 1 year ago
AutonomicPerfectionist requested a review from ggerganov 1 year ago
KerfuffleV2
KerfuffleV2 approved these changes on 2023-11-18
ggerganov merged 40a34fe8 into master 1 year ago
AutonomicPerfectionist deleted the fix-speculative-prompt-processing branch 1 year ago
