Support attention_bias on LLaMA architecture #4283
slaren commented on 2023-12-01

Commits:
- c48679a8 check existence of qkvo bias while loading llama models
- e192572d Update llama.cpp
b1efaed3 ggerganov approved these changes on 2023-12-01
ggerganov merged commit 03562f3a into master 2 years ago