llama.cpp
7c3f55c1 - Add support for encoder-only T5 models (#8900)

* gguf-py : add the T5ENCODER model architecture
* common : call llama_decode() during warmup only if the model has a decoder (see the warmup sketch below)
* convert-hf : add T5EncoderModel
* llama : add the llama_model_has_decoder() API function
* llama : split build_t5() into build_t5_encoder() and build_t5_decoder()
* llama : add support for LLM_ARCH_T5ENCODER
* llama-embedding : add support for LLAMA_POOLING_TYPE_NONE
* llama-embedding : add support for encoder-only models (see the embedding sketch after the changed-files list)

Co-authored-by: Stanisław Szymczyk <sszymczy@gmail.com>
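The warmup change in common.cpp comes down to checking which passes the model actually has before running them. Below is a minimal sketch of that guard, assuming the llama.h helpers named above (llama_model_has_encoder(), llama_model_has_decoder(), llama_encode(), llama_decode()); the batch plumbing is simplified and is not the exact common.cpp code.

```cpp
#include "llama.h"

#include <vector>

// Warmup sketch: run the encoder pass when the model has an encoder, and only
// call llama_decode() when the model actually has a decoder, so encoder-only
// T5 checkpoints are not pushed through a decoder graph they do not have.
static void warmup_sketch(llama_model * model, llama_context * ctx) {
    std::vector<llama_token> tmp;

    // some models (e.g. T5) have no BOS token
    const llama_token bos = llama_token_bos(model);
    if (bos != -1) {
        tmp.push_back(bos);
    }
    tmp.push_back(llama_token_eos(model));

    // single-sequence batch over the warmup tokens
    llama_batch batch = llama_batch_init((int32_t) tmp.size(), /*embd*/ 0, /*n_seq_max*/ 1);
    for (size_t i = 0; i < tmp.size(); ++i) {
        batch.token   [i]    = tmp[i];
        batch.pos     [i]    = (llama_pos) i;
        batch.n_seq_id[i]    = 1;
        batch.seq_id  [i][0] = 0;
        batch.logits  [i]    = false;
    }
    batch.n_tokens = (int32_t) tmp.size();

    if (llama_model_has_encoder(model)) {
        llama_encode(ctx, batch);   // encoder pass (T5 and T5ENCODER)
    }
    if (llama_model_has_decoder(model)) {
        llama_decode(ctx, batch);   // skipped entirely for encoder-only models
    }

    llama_batch_free(batch);
    llama_kv_cache_clear(ctx);      // drop the warmup state
}
```

For a T5ENCODER model only the llama_encode() call runs, which is what makes the warmup safe for encoder-only checkpoints.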
Changed files:
  • common/common.cpp
  • convert_hf_to_gguf.py
  • examples/embedding/embedding.cpp
  • gguf-py/gguf/constants.py
  • include/llama.h
  • src/llama.cpp
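For the llama-embedding changes, the dispatch is roughly the sketch below: encoder-only models go through llama_encode() instead of llama_decode(), and with LLAMA_POOLING_TYPE_NONE the per-token embeddings are read back rather than one pooled vector per sequence. This is a simplified illustration of the logic in examples/embedding/embedding.cpp under those assumptions, not a verbatim excerpt.

```cpp
#include "llama.h"

// Embedding sketch: choose encode vs decode from the model architecture, then
// read back either per-token embeddings (LLAMA_POOLING_TYPE_NONE) or one
// pooled embedding per sequence.
static bool embed_sketch(llama_context * ctx, llama_batch & batch, float * out, int n_embd) {
    const llama_model * model = llama_get_model(ctx);
    const enum llama_pooling_type pooling = llama_pooling_type(ctx);

    llama_kv_cache_clear(ctx);  // previous KV state is irrelevant for embeddings

    int32_t res;
    if (llama_model_has_encoder(model) && !llama_model_has_decoder(model)) {
        res = llama_encode(ctx, batch);   // encoder-only model (T5ENCODER)
    } else {
        res = llama_decode(ctx, batch);   // decoder-only model
    }
    if (res < 0) {
        return false;
    }

    for (int i = 0; i < batch.n_tokens; i++) {
        const float * embd = nullptr;
        int out_pos = 0;

        if (pooling == LLAMA_POOLING_TYPE_NONE) {
            embd    = llama_get_embeddings_ith(ctx, i);                    // one vector per token
            out_pos = i;
        } else {
            embd    = llama_get_embeddings_seq(ctx, batch.seq_id[i][0]);   // pooled per sequence
            out_pos = batch.seq_id[i][0];
        }
        if (embd == nullptr) {
            continue;
        }
        for (int j = 0; j < n_embd; j++) {
            out[out_pos * n_embd + j] = embd[j];
        }
    }
    return true;
}
```

The pooling mode comes from llama_context_params::pooling_type when the context is created (with embeddings enabled); LLAMA_POOLING_TYPE_NONE is what exposes the raw per-token embeddings that encoder-only T5 models are typically used for.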