Fix flash-attn for paged_attention when the kernels library is not available (#41078)
* Fix the paged flash attention implementation used when the kernels library is not installed (an illustrative sketch of the fallback pattern follows the trailers below)
* Cover all cases
* Style
* Update src/transformers/integrations/flash_paged.py
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
* Apply style fixes
---------
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
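Note: the snippet below is a minimal, illustrative sketch of the general "prefer the kernels hub, otherwise fall back to the flash-attn package" pattern this fix concerns. It is not the actual diff to src/transformers/integrations/flash_paged.py; the helper name `load_paged_flash_attention`, the kernel repo id, and the attribute lookups are assumptions made for illustration.

```python
# Sketch only: selecting a varlen flash-attention callable depending on
# whether the optional `kernels` package is importable. Names below are
# illustrative, not the real contents of flash_paged.py.

try:
    from kernels import get_kernel  # optional dependency

    _KERNELS_AVAILABLE = True
except ImportError:
    _KERNELS_AVAILABLE = False


def load_paged_flash_attention():
    """Return a varlen flash-attention function, preferring the kernels hub."""
    if _KERNELS_AVAILABLE:
        # Load a pre-built kernel from the Hugging Face kernels hub.
        # The repo id and attribute name are assumptions for this sketch.
        flash_attn_kernel = get_kernel("kernels-community/flash-attn")
        return flash_attn_kernel.flash_attn_varlen_func

    # No `kernels` package: fall back to the locally installed flash-attn wheel.
    from flash_attn import flash_attn_varlen_func

    return flash_attn_varlen_func
```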