llama.cpp
llama : use n_embd_head_v instead of n_embd_head_k when reshaping kqv
#7327
Merged
ggerganov merged 2 commits into ggml-org:master from fairydreaming:llm_build_kqv_fix
llama : use n_embd_head_v instead of n_embd_head_k when reshaping kqv (f15e933f)
mofosyne added the bugfix label
mofosyne added the Review Complexity : Medium label
llama : use n_embd_v_gqa and n_embd_head_v instead of n_embd_k_gqa an… (886f89da)
ggerganov approved these changes on 2024-05-17
ggerganov merged 27b04069 into master 1 year ago
fairydreaming deleted the llm_build_kqv_fix branch 228 days ago
Reviewers: ggerganov
Assignees: No one assigned
Labels: bugfix, Review Complexity : Medium
Milestone: No milestone