llama.cpp
0a11f8b7 - convert : fix RWKV v6 model conversion (#10913)

Commit
362 days ago
convert : fix RWKV v6 model conversion (#10913)

* Enable --no-context-shift for llama-perplexity example
  Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

* RWKV 6: Fix error in ggml_cuda_op_bin_bcast
  Signed-off-by: Molly Sophia <mollysophia379@gmail.com>

---------

Signed-off-by: Molly Sophia <mollysophia379@gmail.com>
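The second bullet concerns ggml's binary broadcast path on CUDA (ggml_cuda_op_bin_bcast implements element-wise ops where one operand is repeated to match the other). Below is a minimal sketch of the kind of broadcasted add such a graph relies on, built with the public ggml API; the tensor shapes are illustrative only and are not taken from the RWKV v6 implementation:

    #include "ggml.h"

    int main(void) {
        // Illustrative context size; the real graphs size this from the model.
        struct ggml_init_params params = {
            /*.mem_size   =*/ 16 * 1024 * 1024,
            /*.mem_buffer =*/ NULL,
            /*.no_alloc   =*/ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // a: 4x3 matrix, b: length-4 vector
        struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 3);
        struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);

        // ggml_add broadcasts b across the rows of a; on the CUDA backend this
        // kind of op is dispatched through the bin_bcast kernels touched by the fix.
        struct ggml_tensor * c = ggml_add(ctx, a, b);
        (void) c;

        ggml_free(ctx);
        return 0;
    }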