Fix Cache.max_cache_len max value for Hybrid models #39737
fix gemma (40c604ba)
fix min (d143de4e)
fix quant init issue (404208a4)
Merge branch 'main' of github.com:huggingface/transformers into max-c… (8aff7492)
fix gemma 3n (ee1fe17c)
Merge branch 'max-cache-len-fix' of https://github.com/manueldeprada/… (31b1bbef)
skip quant cache test (82a2c5fc)
fix modular (e4e6cc7b)
new test for Gemma (ffb2c618)
include cyril change (e3ca2a30)
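For context, the issue this PR addresses is that in hybrid models (stacks mixing sliding-window and full-attention layers, e.g. Gemma 2/3), the cache-wide `max_cache_len` should reflect the largest per-layer capacity rather than the smallest: a sliding-window layer only ever holds up to `sliding_window` tokens, while a full-attention layer can hold the full requested length. The sketch below only illustrates that rule; the function and field names are hypothetical and not taken from the PR or the transformers codebase.

```python
# Minimal sketch (hypothetical names, not the actual transformers implementation).
# A sliding-window layer's KV cache is capped at `sliding_window`, a full-attention
# layer's cache can grow to the requested `max_cache_len`, so the cache-wide
# maximum should be the max over layers, not the min.

def per_layer_cache_len(layer_type: str, max_cache_len: int, sliding_window: int) -> int:
    """Capacity of a single layer's KV cache."""
    if layer_type == "sliding_attention":
        return min(max_cache_len, sliding_window)
    return max_cache_len  # full attention keeps everything up to max_cache_len


def overall_max_cache_len(layer_types: list[str], max_cache_len: int, sliding_window: int) -> int:
    """Cache-wide maximum length: the largest per-layer capacity."""
    return max(
        per_layer_cache_len(t, max_cache_len, sliding_window) for t in layer_types
    )


# Example: a Gemma-style hybrid stack with sliding-window and full-attention layers.
layer_types = ["sliding_attention"] * 4 + ["full_attention"]
print(overall_max_cache_len(layer_types, max_cache_len=8192, sliding_window=512))  # -> 8192
```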