llama.cpp
2891c8aa - Add support for BERT embedding models (#5423)

Commit
Add support for BERT embedding models (#5423)

* BERT model graph construction (build_bert)
* WordPiece tokenizer (llm_tokenize_wpm)
* Add flag for non-causal attention models
* Allow for models that only output embeddings
* Support conversion of BERT models to GGUF
* Based on prior work by @xyzhang626 and @skeskinen

Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <cebtenzzre@gmail.com>
Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
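
To illustrate what this commit enables, here is a minimal sketch of pulling an embedding out of a GGUF-converted BERT model through the llama.h C API. It is not the bundled examples/embedding program; function names and the `embedding` context flag follow the API of this era (some have since been renamed), and the `bert.gguf` file name is hypothetical.

```cpp
// Minimal sketch: load a GGUF-converted BERT model and print its embedding
// for a single prompt. Error handling is reduced to early returns.
#include "llama.h"

#include <cstdio>
#include <string>
#include <vector>

int main(int argc, char ** argv) {
    const char * model_path = argc > 1 ? argv[1] : "bert.gguf"; // hypothetical file name
    const std::string prompt = "Hello, world";

    llama_backend_init(false);

    llama_model * model = llama_load_model_from_file(model_path, llama_model_default_params());
    if (model == NULL) {
        return 1;
    }

    llama_context_params cparams = llama_context_default_params();
    cparams.embedding = true; // request embedding output from the context
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // tokenize with the model's own tokenizer (WordPiece for BERT-style models)
    std::vector<llama_token> tokens(prompt.size() + 8);
    const int n_tokens = llama_tokenize(model, prompt.c_str(), (int) prompt.size(),
                                        tokens.data(), (int) tokens.size(),
                                        /*add_bos*/ true, /*special*/ false);
    if (n_tokens < 0) {
        return 1;
    }
    tokens.resize(n_tokens);

    // evaluate the whole sequence as one batch (non-causal attention sees all tokens)
    llama_batch batch = llama_batch_get_one(tokens.data(), (int) tokens.size(), 0, 0);
    if (llama_decode(ctx, batch) != 0) {
        return 1;
    }

    // read back the embedding vector
    const int     n_embd = llama_n_embd(model);
    const float * embd   = llama_get_embeddings(ctx);
    for (int i = 0; i < n_embd; i++) {
        printf("%s%f", i > 0 ? " " : "", embd[i]);
    }
    printf("\n");

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

The bundled embedding example covers the same flow with the common argument parser, invoked along the lines of `./embedding -m bert.gguf -p "Hello, world"`.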
Files changed:
  • .flake8
  • convert-hf-to-gguf.py
  • examples/embedding/embedding.cpp
  • gguf-py/gguf/constants.py
  • gguf-py/gguf/gguf_writer.py
  • gguf-py/gguf/tensor_mapping.py
  • llama.cpp
  • llama.h