transformers
Fix GPT2 attention scaling ignored in SDPA/FlashAttention #44397
Merged
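
For context: PyTorch's `F.scaled_dot_product_attention` applies its own default `1/sqrt(head_dim)` scaling whenever no explicit `scale=` argument is passed, so GPT2's configurable scaling (the `scale_attn_weights` and `scale_attn_by_inverse_layer_idx` flags on `GPT2Config`) can be silently dropped on the SDPA/FlashAttention path. Below is a minimal sketch of the kind of fix the title describes, assuming those config flags; it is illustrative, not the PR's actual diff.

```python
# Illustrative sketch only, not the PR's diff. The flag names follow
# GPT2Config in transformers; the function itself is hypothetical.
import math

import torch
import torch.nn.functional as F


def gpt2_sdpa_attention(query, key, value, layer_idx,
                        scale_attn_weights=True,
                        scale_attn_by_inverse_layer_idx=False):
    """Apply SDPA while honoring GPT2's configurable attention scaling."""
    # GPT2's effective scale: 1/sqrt(head_dim) when scale_attn_weights is
    # set, optionally divided further by (layer_idx + 1).
    scale = 1.0
    if scale_attn_weights:
        scale /= math.sqrt(query.size(-1))
    if scale_attn_by_inverse_layer_idx:
        scale /= float(layer_idx + 1)

    # The bug: calling scaled_dot_product_attention without `scale=` makes
    # PyTorch fall back to its default 1/sqrt(head_dim), ignoring the config
    # flags above. The fix is to pass the computed scale explicitly
    # (the `scale` kwarg exists since PyTorch 2.1).
    return F.scaled_dot_product_attention(
        query, key, value, is_causal=True, scale=scale
    )


if __name__ == "__main__":
    q = k = v = torch.randn(1, 12, 8, 64)  # (batch, heads, seq, head_dim)
    out = gpt2_sdpa_attention(q, k, v, layer_idx=3,
                              scale_attn_by_inverse_layer_idx=True)
    print(out.shape)  # torch.Size([1, 12, 8, 64])
```

With `scale_attn_by_inverse_layer_idx=True`, the effective scale differs from SDPA's default at every layer past the first, so omitting `scale=` changes the attention logits rather than just their presentation; passing it explicitly keeps the eager and SDPA/FlashAttention paths numerically consistent.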