fix: correct type annotations across config classes for @strict validation (#45007)
* fix: correct type annotations across config classes for @strict validation
Fix bool fields mistyped as int (would fail @strict validation):
- BigBird, Cohere2: use_cache: int → bool
- MBart, M2M100: scale_embedding: int → bool
- OLMo, OLMo2, OLMo3, OLMoE, PhiMoE, Eurobert, PaddleOCR-VL:
tie_word_embeddings: int → bool
- DAB-DETR, MVP, EncoderDecoder, SpeechEncoderDecoder:
is_encoder_decoder: int → bool
- Ernie4.5-MoE, Ernie4.5-VL-MoE: use_bias: int → bool
- Falcon-H1, Ernie4.5, PaddleOCR-VL: use_cache: int → bool
- Chameleon: attention_bias: int → bool
- GroundingDINO: two_stage: int → bool
- OmDet-Turbo: learn_initial_query: int → bool
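Why an int annotation breaks for bool values: strict validators typically match types exactly rather than via isinstance, so bool (a subclass of int) is rejected where int is declared. A minimal sketch of that assumption (check_exact is a hypothetical helper, not the library's API):

```python
def check_exact(value, annotation):
    """Exact-type match: bool is a subclass of int, but type(True) is bool."""
    return type(value) is annotation

assert check_exact(True, bool)       # use_cache: bool accepts True
assert not check_exact(True, int)    # use_cache: int rejects True
assert check_exact(1, int)
```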
Fix 336 dropout/rate/multiplier/scaling fields from bare float to
float | int across 163 config files and 14 modular source files.
This prevents a @strict TypeError when hub configs store these values
as integers (e.g., dropout: 0 instead of dropout: 0.0).
Follows the existing pattern used by LlamaConfig, MistralConfig,
AlbertConfig, and DistilBertConfig which already use float | int.
Both generated configuration files and their modular source files
are updated so that the repo-consistency checks (make repo-consistency)
keep passing.
* fix: regenerate videomt config from modular source
Propagate float | int type annotations for hidden_dropout_prob and
drop_path_rate from the eomt modular parent to the generated
configuration_videomt.py file.
---------
Co-authored-by: Raushan Turganbay <raushan@huggingface.co>