transformers
97ca0b47 - Fix flash-attn for paged_attention when no kernels (#41078)

Committed 97 days ago
Fix flash-attn for paged_attention when no kernels (#41078)

* Fix non-kernels flash attention paged implementation
* Cover all cases
* Style
* Update src/transformers/integrations/flash_paged.py

Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>

* Apply style fixes

---------

Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
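The commit title refers to the code path taken when the optional `kernels` package is not installed, so the paged-attention integration has to fall back to the regular `flash_attn` wheel. The following is a minimal, hedged sketch of that selection pattern only; it is not the code from `src/transformers/integrations/flash_paged.py`, and the hub repo id and the attribute looked up on the loaded kernel are assumptions.

```python
# Hedged sketch of a "hub kernel, else local flash-attn" fallback.
# Not the actual transformers implementation; the repo id and the
# attribute name on the loaded kernel module are assumptions.
try:
    from kernels import get_kernel

    # Assumption: the hub-distributed kernel exposes a varlen
    # flash-attention entry point under this attribute name.
    _flash = get_kernel("kernels-community/flash-attn")
    flash_attn_varlen_func = _flash.flash_attn_varlen_func
except ImportError:
    # No `kernels` package installed: use the flash-attn wheel directly.
    from flash_attn import flash_attn_varlen_func
```

Per the commit's bullet list, the fix was about making this non-kernels branch of the paged flash-attention implementation behave correctly in all cases.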