llama.cpp
c4ded1a8 - llama : make pos_bias contiguous for CUDA