llama.cpp
517b5ddb - CUDA: Improve flash decoding kernel GPU occupancy for BS=1 case (#12183)

Commit
CUDA: Improve flash decoding kernel GPU occupancy for BS=1 case (#12183)

- Find the number of active blocks per SM using the cudaOccupancyMaxActiveBlocksPerMultiprocessor API. Use this value to determine the optimal parallel_blocks value.
- Prefer the vector flash attention kernels over the MMA kernel for BS=1.

Fixes issue: #12182

Co-authored-by: Johannes Gäßler <johannesg@5d6.de>
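The idea behind the first change can be sketched as follows. This is a minimal illustration, not the actual llama.cpp code: the kernel, `choose_parallel_blocks`, and its parameters are hypothetical stand-ins; only `cudaOccupancyMaxActiveBlocksPerMultiprocessor` and the device-attribute query are real CUDA runtime APIs.

```cuda
#include <cuda_runtime.h>
#include <algorithm>

// Hypothetical stand-in for the flash-decoding kernel; only its launch
// configuration matters for the occupancy query below.
__global__ void flash_decode_kernel(const float *q, const float *kv, float *out) {
    // ... attention math elided ...
}

// Sketch: choose how many blocks to split the KV sequence into so that,
// at batch size 1 (one block per attention head), the whole GPU is filled.
int choose_parallel_blocks(int n_heads, int block_size, size_t smem_bytes) {
    int dev = 0, n_sm = 0, blocks_per_sm = 0;
    cudaGetDevice(&dev);
    cudaDeviceGetAttribute(&n_sm, cudaDevAttrMultiProcessorCount, dev);

    // Ask the runtime how many blocks of this kernel, with this block size
    // and dynamic shared memory usage, can be resident on one SM at once.
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(
        &blocks_per_sm, flash_decode_kernel, block_size, smem_bytes);

    // Blocks needed to occupy every SM slot on the device.
    int target = n_sm * blocks_per_sm;

    // Split each head's work into enough chunks to reach that target.
    return std::max(1, target / std::max(1, n_heads));
}
```

The point of querying occupancy at runtime, rather than hard-coding a split factor, is that the right `parallel_blocks` depends on the kernel's register and shared-memory footprint and on the SM count of the installed GPU, both of which vary across devices.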