text-generation-inference
66914f7b - fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637)

Commit · 1 year ago
fix: LlamaTokenizerFast to AutoTokenizer at flash_mistral.py (#1637)

# What does this PR do?

There are cases where a model uses the Mistral or Mixtral architecture but not a Llama tokenizer. When loading `LlamaTokenizerFast` fails in those cases, `flash_mistral.py` should fall back to `AutoTokenizer` in the exception handler. Similar to PR #619. @Narsil
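A minimal sketch of the fallback pattern this PR describes, using the `transformers` classes `LlamaTokenizerFast` and `AutoTokenizer`. The helper name `load_tokenizer` and the keyword arguments (`revision`, `padding_side`, etc.) are illustrative assumptions modeled on typical TGI tokenizer loading, not copied from the diff:

```python
from transformers import AutoTokenizer, LlamaTokenizerFast


def load_tokenizer(model_id, revision=None, trust_remote_code=False):
    # Most Mistral/Mixtral checkpoints ship a Llama-style tokenizer,
    # so try the fast Llama tokenizer first.
    try:
        return LlamaTokenizerFast.from_pretrained(
            model_id,
            revision=revision,
            padding_side="left",
            truncation_side="left",
            trust_remote_code=trust_remote_code,
        )
    except Exception:
        # Some Mistral/Mixtral-architecture models use a different
        # tokenizer class; fall back to AutoTokenizer, which resolves
        # the right class from the checkpoint's config.
        return AutoTokenizer.from_pretrained(
            model_id,
            revision=revision,
            padding_side="left",
            truncation_side="left",
            trust_remote_code=trust_remote_code,
        )
```

The try/except keeps the common fast path unchanged while letting non-Llama tokenizers load through the generic resolver instead of raising.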