Transformers 4.56/4.57 support #1529
02f9c501  transformers 4.57
c68919f1  patch dynamic cache layer
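The "patch dynamic cache layer" commit responds to the cache refactor in transformers 4.56+, where `DynamicCache` moved from flat `key_cache`/`value_cache` lists to per-layer objects exposed via `cache.layers`. A minimal sketch of a version-bridging accessor, assuming the layered layout uses `.keys`/`.values` attributes; the `get_kv` helper and the duck-typed stand-in objects below are hypothetical illustrations, not optimum-intel's actual patch:

```python
from types import SimpleNamespace

def get_kv(cache, layer_idx):
    """Return (keys, values) for one layer, regardless of cache layout."""
    if hasattr(cache, "layers"):
        # layered cache (transformers >= 4.56): per-layer objects
        layer = cache.layers[layer_idx]
        return layer.keys, layer.values
    # legacy flat lists (transformers < 4.56)
    return cache.key_cache[layer_idx], cache.value_cache[layer_idx]

# duck-typed stand-ins for both layouts (illustration only)
new_style = SimpleNamespace(layers=[SimpleNamespace(keys="k0", values="v0")])
old_style = SimpleNamespace(key_cache=["k0"], value_cache=["v0"])

print(get_kv(new_style, 0))  # ('k0', 'v0')
print(get_kv(old_style, 0))  # ('k0', 'v0')
```

Export-time model patchers can use an accessor like this so the same tracing code works across both transformers releases.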
073fc464  fix qwen and gpt_oss
5b245cfc  fix seq2seq models as well
513977a9  fix
43d58427  fix
79a0bbfb  more decoder fixes
bc57cecf  limit awq
6489d7e2  fix dynamic layer in optimum-onnx's model patcher
11b5a5a9  remove
d6cd7a60  fix donut
272a624d  vlm fixes
c62546e1  fix speecht5
a7ede396  fix whisper
817bc540  fix
225b81d0  fix qwenvl
7c5c92c4  better fix
91162623  fix recursion issue
f4591a72  fix llama4 and quantization
a5029bd9  fix setup
d1449c61  fix gemma3 and skip grouped beam search
a6794113  fix
25d2f667  fix quants
bfcf961d  fix
echarlaix approved these changes on 2025-12-01
b714f6d7  fix
3ca93c8c  revert line
20250f68  test offline on python 3.10
e5d2dc65  ov 2025.4.0
ad94d8fa  fix
99372b87  simply skip phi4
d14416b7  Apply suggestion from @IlyasMoutawwakil