transformers
25343aaf - Fix SDPA attention precision issue in Qwen2.5-VL (#37363)

Fix SDPA attention precision issue in Qwen2.5-VL (#37363)

* solve conflicts and remove redundant attention_mask in qwenvit
* update decoded text check
* remove trailing whitespace
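The "redundant attention_mask" point can be illustrated with a minimal sketch. This is not the actual transformers code: it is a hypothetical, stdlib-only implementation of scaled dot-product attention showing that an all-visible mask does not change the result mathematically, so dropping it is safe; in a real SDPA kernel, passing an unnecessary mask can also steer execution onto a slower or less numerically favorable code path.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def sdpa(q, k, v, mask=None):
    """Scaled dot-product attention over lists of vectors.

    q, k, v: seq_len x head_dim nested lists.
    mask: optional seq_len x seq_len booleans; False positions are hidden.
    """
    d = len(q[0])
    out = []
    for i, qi in enumerate(q):
        # Attention scores: q_i . k_j / sqrt(d)
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        if mask is not None:
            scores = [s if mask[i][j] else float("-inf")
                      for j, s in enumerate(scores)]
        w = softmax(scores)
        # Weighted sum of value vectors.
        out.append([sum(wj * vj[t] for wj, vj in zip(w, v))
                    for t in range(len(v[0]))])
    return out

q = [[1.0, 0.0], [0.0, 1.0]]
k = [[1.0, 1.0], [0.0, 1.0]]
v = [[2.0, 0.0], [0.0, 2.0]]

# An all-True mask hides nothing, so it is redundant: same output either way.
full_mask = [[True, True], [True, True]]
assert sdpa(q, k, v) == sdpa(q, k, v, full_mask)
```

The toy example only demonstrates the mathematical equivalence; the precision aspect of the fix comes from which backend kernel PyTorch's SDPA dispatches to, which the mask argument can influence.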