llama.cpp
213701b5 - Detokenizer fixes (#8039)

Detokenizer fixes (#8039)

* Add llama_detokenize():
  - Update header files location
  - UNKNOWN and CONTROL are 'special pieces'
  - Remove space after UNKNOWN and CONTROL
  - Refactor llama_token_to_piece()
  - Add flag: clean_up_tokenization_spaces
  - Symmetric params for llama_tokenize() and llama_detokenize()

* Update and fix tokenizer tests:
  - Use llama_detokenize()
  - Treat an unexpected vocab type as a test failure instead of an error, which is useful when automating tests: if you don't know the vocab type in advance, it differentiates this case from other loading errors
  - Skip unicode surrogates and undefined codepoints
  - Gracefully exit threads (using exit() was throwing random exceptions)
  - Clean old known problematic codepoints
  - Minor: fix a confusing hexadecimal codepoint

* Update brute-force random tests:
  - Add detokenizer checks
  - New generator: ascii_lr_strip
  - New generator: apostrophe
  - Add more vocab files
  - Detokenize special tokens
  - Replace errors with '\uFFFD' when detokenizing to 'utf-8'
  - More edge cases
  - Better detokenization results check

* Fix add_space_prefix, set false by default
* Better leading space removal
* Do not remove space when decoding special tokens
* Bugfix: custom regexes were splitting undefined unicode codepoints
* 'viking' detokenizer clean spaces
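Two behaviors named above can be illustrated in a small, self-contained way: the `clean_up_tokenization_spaces` flag (modeled here on the Hugging Face tokenizers' cleanup step, which drops the space a tokenizer inserts before punctuation and contractions) and replacing invalid UTF-8 with '\uFFFD' during detokenization. The Python sketch below is a rough approximation for intuition only; the helper names are hypothetical and not part of the llama.cpp API.

```python
def clean_up_tokenization_spaces(text: str) -> str:
    # Hypothetical helper mirroring the Hugging Face-style cleanup rules:
    # remove the extra space before punctuation and common contractions.
    for before, after in ((" .", "."), (" ,", ","), (" !", "!"), (" ?", "?"),
                          (" n't", "n't"), (" 'm", "'m"), (" 's", "'s"),
                          (" 've", "'ve"), (" 're", "'re")):
        text = text.replace(before, after)
    return text

def detokenize_bytes(raw: bytes) -> str:
    # Invalid UTF-8 sequences become U+FFFD (REPLACEMENT CHARACTER)
    # instead of raising, matching the error handling the tests check for.
    return raw.decode("utf-8", errors="replace")

print(clean_up_tokenization_spaces("Hello , world ! I 'm here ."))
# -> Hello, world! I'm here.
print(detokenize_bytes(b"abc\xff"))
# -> abc\ufffd
```

The actual C++ implementation operates on the token pieces directly; this sketch only shows the observable effect on the final string.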
Files changed:

  • common/common.cpp
  • common/common.h
  • examples/batched.swift/Sources/main.swift
  • examples/llama.swiftui/llama.cpp.swift/LibLlama.swift
  • include/llama.h
  • src/llama.cpp
  • src/unicode.cpp
  • tests/test-tokenizer-0.cpp
  • tests/test-tokenizer-1-bpe.cpp
  • tests/test-tokenizer-1-spm.cpp
  • tests/test-tokenizer-random.py