[Qwen2.5-VL] Fix torch.finfo() TypeError for integer attention_mask_tensor #39333
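The root cause is that `torch.finfo()` only accepts floating point dtypes, so when `attention_mask_tensor` arrives as an integer tensor, calling `torch.finfo(attention_mask_tensor.dtype)` raises a `TypeError`. A minimal standalone reproduction of that failure mode (the shape and dtype here are illustrative, not taken from the model code):

```python
import torch

# A 0/1 attention mask supplied as an integer tensor (e.g. torch.long).
attention_mask_tensor = torch.ones(1, 16, dtype=torch.long)

# torch.finfo() only handles floating point dtypes, so this raises
# TypeError and suggests using torch.iinfo for integer dtypes instead.
min_value = torch.finfo(attention_mask_tensor.dtype).min
```

The commits below track the iterations on that fix across modeling_qwen2_5_vl.py, modular_qwen2_5_vl.py, and modeling_glm4v.py.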
Update modeling_qwen2_5_vl.py (bc05b0a6)
Fix dtype compatibility in attention mask processing (051fcacd)
Update modeling_qwen2_5_vl.py (82e6f4d0)
Update modeling_qwen2_5_vl.py (3e43bd07)
Merge branch 'main' into main (efa6d716)
Fix: Cast to float before applying torch.finfo (7e7a45a7)
Merge branch 'main' into main (f49accd2)
Fix: Use appropriate function based on dtype (e05fb988)
Update modular_qwen2_5_vl.py (cf322ead)
Fix: Cast to float before applying torch.finfo (daa6043b)
Fix: Use appropriate function based on dtype (4ff82aad)
Merge pull request #1 from dsnsabari/new-fixes-patch2 (0ef332db)
Fix: Use appropriate function based on dtype (cbc39ba0)
Update modeling_glm4v.py (89fddcd4)
Only apply conversion for floating point tensors (inverted masks) (a7416f30)
corrected the format issue (4cd19aa1)
Merge pull request #2 from dsnsabari/dsnsabari-patch-3 (20f2fba8)
Fix: Cast to float before applying torch.finfo (562b838f)
Fix torch.finfo() for integer attention mask (005612ef)
Merge pull request #3 from dsnsabari/dsnsabari-patch-3-1 (6d776caa)
Run make fix-copies and make style for CI compliance (b77db1ab)
Fix torch.finfo() TypeError for (0585a623)
Merge branch 'main' into main (656d5361)
Merge branch 'patch-5' of https://github.com/dsnsabari/transformers i… (9f6f8d52)
Merge pull request #4 from dsnsabari/patch-5 (bc398c5f)
Fix torch.finfo() TypeError for integer (1c9a383e)
Merge pull request #5 from dsnsabari/qwen2_vl-patch-6 (b117a574)
Merge branch 'main' into main (17997cd1)
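Taken together, the commits converge on making the conversion dtype-aware: the `torch.finfo()`-based inversion only makes sense for floating point ("inverted" additive) masks, while integer 0/1 masks can be left alone. A minimal sketch of that guard, assuming the usual convention that a floating point mask holds 0.0 for kept positions and finfo.min for masked ones; the helper name is hypothetical, and in the PR the guard sits inline in the model code rather than in a standalone function:

```python
import torch

def to_binary_mask(attention_mask_tensor: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: return a 0/1 integer mask for either mask convention."""
    # Only apply conversion for floating point tensors (inverted masks);
    # integer masks are assumed to already be 0/1 and are passed through.
    if attention_mask_tensor.dtype.is_floating_point:
        attention_mask_tensor = attention_mask_tensor / torch.finfo(attention_mask_tensor.dtype).min
        attention_mask_tensor = (1.0 - attention_mask_tensor).int()
    return attention_mask_tensor

# Integer 0/1 mask: no torch.finfo() call, so no TypeError.
int_mask = torch.ones(1, 16, dtype=torch.long)
print(to_binary_mask(int_mask).dtype)  # torch.int64 (unchanged)

# Floating point inverted mask: finfo.min marks masked positions.
float_mask = torch.zeros(1, 16, dtype=torch.float32)
float_mask[:, 8:] = torch.finfo(torch.float32).min
print(to_binary_mask(float_mask))  # 1 for kept positions, 0 for masked ones
```

Earlier commits in the history experimented with casting the mask to float before calling torch.finfo() and with choosing between torch.finfo() and torch.iinfo() based on dtype; the sketch above follows the "Only apply conversion for floating point tensors (inverted masks)" approach from commit a7416f30.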