llama.cpp
a743d76a - CUDA: generalize FP16 fattn vec kernel (#7061)

Commit
CUDA: generalize FP16 fattn vec kernel (#7061)

* CUDA: generalize FP16 fattn vec kernel
* disable unsupported head sizes for AMD in test
* try AMD fix
* fix batch size 2-8
* partially revert changes