llama.cpp
5f5e39e1 - model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466)

Commit · 267 days ago
model : Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture (#12466)

* Nomic Embed Text V2 with Mixture-of-Experts (MoE) architecture

  - Adds MoE-based embedding model supporting multilingual embeddings.
  - Selects architecture variant based on hyperparameter detection (MoE layers).
  - Removes unnecessary subclass initialization checks for clarity.

  https://www.nomic.ai/blog/posts/nomic-embed-text-v2

  Co-authored-by: Jared Van Bortel <jared@nomic.ai>

* fix tokenizer

* don't rename this tensor

---------

Co-authored-by: Jared Van Bortel <jared@nomic.ai>
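The "hyperparameter detection" mentioned in the commit message refers to picking the architecture variant from the model's configuration when converting to GGUF. A minimal sketch of what such a check might look like is below; the function name, hyperparameter keys (`moe_every_n_layers`, `num_experts`), and architecture strings are illustrative assumptions, not the exact identifiers used in this commit.

```python
# Illustrative sketch (not the code from this commit): choose the GGUF
# architecture variant for Nomic Embed Text based on MoE hyperparameters
# found in the Hugging Face config. Key names here are assumptions.

def select_nomic_bert_arch(hparams: dict) -> str:
    """Pick the architecture string from the model's hyperparameters."""
    # V2 configs describe a Mixture-of-Experts layout; if such keys are
    # present, map to the MoE variant, otherwise fall back to plain BERT-style.
    if hparams.get("moe_every_n_layers", 0) > 0 or hparams.get("num_experts", 0) > 1:
        return "nomic-bert-moe"
    return "nomic-bert"


if __name__ == "__main__":
    v1_config = {"n_layer": 12}
    v2_config = {"n_layer": 12, "num_experts": 8, "moe_every_n_layers": 2}
    print(select_nomic_bert_arch(v1_config))  # nomic-bert
    print(select_nomic_bert_arch(v2_config))  # nomic-bert-moe
```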