llama : fix attention layer count sanity check #6550
68047141 llama : fix attention layer count sanity check
7bab4c05 llama : fix parentheses in attention layer count sanity check
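The second commit points at an operator-precedence issue in the assertion. Below is a minimal sketch of that kind of fix, not the PR's actual diff: the identifiers (`n_attention_wv`, `n_layer`, `check_attention_layer_count`) and the use of a plain `assert` with a message string are assumptions based only on the commit titles.

```cpp
#include <cassert>

// Hypothetical stand-in for the sanity check named in the commit titles;
// the identifiers here are illustrative, not taken from llama.cpp.
static void check_attention_layer_count(int n_attention_wv, int n_layer) {
    // Without parentheses, `&&` binds tighter than `||`, so the message
    // string attaches only to the right-hand comparison:
    //     n_attention_wv == 0 || (n_attention_wv == n_layer && "msg")
    // The string literal is always truthy, so the runtime result is the
    // same, but the grouping does not match the intent and compilers warn
    // about it (e.g. GCC's -Wparentheses, clang's -Wlogical-op-parentheses):
    //
    // assert(n_attention_wv == 0 || n_attention_wv == n_layer && "n_attention_wv is unexpected");

    // Parenthesizing the disjunction makes the grouping explicit and keeps
    // the message attached to the whole condition:
    assert((n_attention_wv == 0 || n_attention_wv == n_layer) &&
           "n_attention_wv is unexpected");
}

int main() {
    check_attention_layer_count(0, 64);   // e.g. a model with no attention layers
    check_attention_layer_count(64, 64);  // every layer carries attention
    return 0;
}
```

Either form fails the same way at runtime, since the message string is always truthy; the added parentheses mainly make the intended grouping explicit and silence the compiler warning about mixing `&&` and `||` without parentheses.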
compilade approved these changes on 2024-04-08
ggerganov merged cc4a9542 into master 1 year ago
ggerganov deleted the gg/quantize-mamba-assert branch 1 year ago