transformers
Fix flash-attn for paged_attention when no kernels
#41078
Merged

remi-or merged 7 commits into huggingface:main from remi-or:fix-fa
remi-or  Fix non-kernels flash attention paged implementation (9a910f28)
remi-or  Cover all cases (77dbd8e8)
remi-or  Style (703b32a2)
remi-or changed the title from "Fix fa" to "Fix flash-attn for paged_attention when no kernels" 102 days ago
MekkCyber approved these changes on 2025-09-25
remi-or  Update src/transformers/integrations/flash_paged.py (687cd7c4)
remi-or  Merge branch 'main' into fix-fa (c5b20dbc)
github-actions[bot]  Apply style fixes (1d8f0cda)
remi-or  Merge branch 'main' into fix-fa (1c8e6604)
remi-or merged 97ca0b47 into main 99 days ago
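
For context on the "no kernels" case: paged attention in transformers can use a prebuilt flash-attention kernel fetched via the optional `kernels` package, and this PR targets the path taken when that package is absent and the locally installed `flash-attn` package is used instead (the touched file is `src/transformers/integrations/flash_paged.py`). Below is a minimal sketch of that style of fallback, not the actual diff; only `get_kernel` (from `kernels`) and `flash_attn_varlen_func` (from `flash-attn`) are real APIs, and the wiring around them is an assumption.

```python
# Hedged sketch of a kernels-vs-flash-attn fallback; not the actual patch.
try:
    from kernels import get_kernel

    # Prefer the Hub-distributed flash-attention kernel when `kernels` is installed.
    # (Attribute layout of the loaded kernel module is assumed here.)
    flash_attn_varlen_func = get_kernel("kernels-community/flash-attn").flash_attn_varlen_func
except ImportError:
    # "No kernels" case: fall back to the flash-attn package installed locally.
    from flash_attn import flash_attn_varlen_func
```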
