transformers
25343aaf
- Fix SDPA attention precision issue in Qwen2.5-VL (#37363)
Commit
258 days ago
Fix SDPA attention precision issue in Qwen2.5-VL (#37363)

* solve conflicts and remove redundant attention_mask in qwenvit
* update decoded text check
* remove trailing whitespace
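The "redundant attention_mask" point can be illustrated with a minimal sketch (not the actual patch): when every position may attend to every other, an explicit all-True mask passed to `torch.nn.functional.scaled_dot_product_attention` is mathematically equivalent to passing no mask at all, and omitting it simplifies the code path and lets PyTorch choose among its fused SDPA kernels.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes for illustration: (batch, heads, seq_len, head_dim).
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 8, 16, 64)
v = torch.randn(1, 8, 16, 64)

# A redundant all-True mask: full visibility, same semantics as no mask.
mask = torch.ones(1, 1, 16, 16, dtype=torch.bool)

out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
out_no_mask = F.scaled_dot_product_attention(q, k, v)

# The two calls agree numerically; dropping the mask is a pure simplification.
print(torch.allclose(out_masked, out_no_mask, atol=1e-6))
```

This is only a sketch of why the mask is removable in the full-attention case; the commit itself should be consulted for how Qwen2.5-VL's vision tower actually builds and passes its masks.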
References
#37363 - Fix SDPA attention precision issue in Qwen2.5-VL
Author
JJJYmmm
Parents
0e1c2817