text-generation-inference
535149d8 - fix: only use eos_token_id as pad_token_id if int (#2774)

fix: only use eos_token_id as pad_token_id if int (#2774)

Llama 3 has a list of values as eos_token_id: `['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']`. This breaks the tokenizer, since it expects a single value. This commit uses tokenizer.eos_token_id instead in such a case.

Fixes: #2440

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
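The fix described above can be sketched as a small helper. This is an illustrative sketch, not TGI's actual code: the function name `resolve_pad_token_id` and its parameters are hypothetical, and the token ids shown are only examples of the Llama 3 pattern.

```python
# Hedged sketch of the fix: use the model config's eos_token_id as
# pad_token_id only when it is a single int; otherwise fall back to the
# tokenizer's own eos_token_id (names here are illustrative, not TGI's).

def resolve_pad_token_id(config_eos_token_id, tokenizer_eos_token_id):
    """Return a single int suitable as pad_token_id.

    Llama 3 style configs can carry a *list* of eos token ids, but a
    tokenizer expects pad_token_id to be one int, so lists are rejected.
    """
    if isinstance(config_eos_token_id, int):
        return config_eos_token_id
    # Config value is a list (or otherwise not an int): fall back to the
    # tokenizer's single eos_token_id instead.
    return tokenizer_eos_token_id


# Single-int config value is used directly.
print(resolve_pad_token_id(2, 128001))
# Llama 3 style list: fall back to the tokenizer's eos_token_id.
print(resolve_pad_token_id([128001, 128008, 128009], 128001))
```

With a plain int the config value passes through unchanged; with a list, the tokenizer's own eos_token_id is returned, which is the behavior the commit describes.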