transformers
ae9dd02e
- Fix incorrect accelerator device handling for MPS in `TrainingArguments` (#31812)
Committed 1 year ago
Fix incorrect accelerator device handling for MPS in `TrainingArguments` (#31812)

* Fix wrong accelerator device setup when using MPS
* More robust `TrainingArguments` MPS handling
* Update training_args.py
* Cleanup
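The commit message does not include the patch itself. As a rough, hedged illustration of the kind of device-selection logic the fix concerns, the sketch below picks an accelerator with MPS checked explicitly rather than assumed. The `select_device` helper is hypothetical and is not the actual `TrainingArguments` implementation; only `torch.cuda.is_available()`, `torch.backends.mps.is_available()`, and `torch.device(...)` are real PyTorch calls.

```python
import torch

def select_device() -> torch.device:
    """Hypothetical helper: choose the best available accelerator.

    This mirrors the general shape of accelerator selection in
    training setup code; the real logic lives in training_args.py
    and differs in detail.
    """
    if torch.cuda.is_available():
        return torch.device("cuda")
    # MPS (Apple Silicon GPU) needs its own explicit check; a skipped
    # or mis-ordered check here is the sort of bug the commit title
    # describes (assumption, not taken from the actual diff).
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

print(select_device())
```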
References
#29969 - [SigLIP] Add fast tokenizer
#31812 - Fix incorrect accelerator device handling for MPS in `TrainingArguments`
#32831 - [Docs] Update resources
#33111 - [Backbone] Remove out_features everywhere
#33174 - [Zero-shot image classification pipeline] Remove tokenizer_kwargs
#39821 - Support MetaCLIP 2
#59 - Fix attention mask handling in EoMT-DINOv3 converter
#62 - Add initial DEIMv2 model implementation
#65 - Fix RTDetrV2 sine position embedding ordering
#43710 - [Docs] Add docs for GLM-OCR and fix EomT-DINOv3
#43729 - [Doc tests] Fix bug
Author: andstor
Parents: 4879ac2b