transformers
794fde7b - Fixing flex attention for torch=2.6.0 (#37285)

Fixing flex attention for torch=2.6.0 (#37285)

* adding compile kwarg for torch 2.6
* fixing dynamic
* addressing comment
* typo
* Update src/transformers/integrations/flex_attention.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
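A minimal sketch of the kind of change the commit describes: compiling PyTorch's flex_attention with an explicit `dynamic` kwarg for torch 2.6. This is an illustrative assumption, not the actual diff to src/transformers/integrations/flex_attention.py; the `causal_score_mod` helper and tensor shapes are made up for the example.

```python
# Sketch only: compiling flex_attention with an explicit `dynamic` kwarg,
# as the commit bullets ("adding compile kwarg for torch 2.6", "fixing dynamic")
# suggest. Not the actual transformers implementation.
import torch
from torch.nn.attention.flex_attention import flex_attention

# torch.compile the attention kernel once; passing `dynamic` explicitly avoids
# relying on torch 2.6's default dynamic-shape behavior.
compiled_flex_attention = torch.compile(flex_attention, dynamic=False)

def causal_score_mod(score, b, h, q_idx, kv_idx):
    # Illustrative score_mod: mask out future key positions (causal attention).
    return torch.where(q_idx >= kv_idx, score, -float("inf"))

# Toy inputs with shape (batch, heads, seq_len, head_dim).
device = "cuda" if torch.cuda.is_available() else "cpu"
q = torch.randn(1, 4, 128, 64, device=device)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = compiled_flex_attention(q, k, v, score_mod=causal_score_mod)
print(out.shape)  # torch.Size([1, 4, 128, 64])
```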