llama.cpp
73cf442e - llama : fix Gemma-2 Query scaling factors (#8473)

llama : fix Gemma-2 Query scaling factors (#8473)

* 9B - query_pre_attn_scalar = 256 not 224

  See https://github.com/google/gemma_pytorch/commit/03e657582d17cb5a8617ebf333c1c16f3694670e

  Gemma 9b should use 256 and not 224 (self.config.hidden_size // self.config.num_attention_heads)

* llama : fix Gemma-2 Query scaling factor

  ggml-ci

---------

Co-authored-by: Daniel Han <danielhanchen@gmail.com>
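A minimal sketch of why this value matters: in scaled dot-product attention the query is multiplied by 1/sqrt(query_pre_attn_scalar) before Q·K^T, so changing the scalar from 224 (the value derived as hidden_size // num_attention_heads) to 256 changes every attention score. The helper name below is illustrative, not llama.cpp's actual API.

```python
import math

def query_scale(query_pre_attn_scalar: int) -> float:
    # Queries are scaled by 1/sqrt(query_pre_attn_scalar) before Q . K^T,
    # as in standard scaled dot-product attention.
    return 1.0 / math.sqrt(query_pre_attn_scalar)

# Old (incorrect) value, derived as hidden_size // num_attention_heads:
old_scale = query_scale(224)
# Fixed value for Gemma-2 9B per the referenced gemma_pytorch commit:
new_scale = query_scale(256)

print(round(old_scale, 6), round(new_scale, 6))  # → 0.066815 0.0625
```

The fixed scale is exactly 1/16, a roughly 6% change in the pre-softmax logits, which is enough to measurably shift the model's output distribution.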