transformers
37fa1f65
- fix jamba slow forward for multi-gpu (#30418)
Commit · 1 year ago
fix jamba slow forward for multi-gpu (#30418)
- fix jamba slow forward for multi-gpu
- remove comm
- oops
- style
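The diff itself is not shown on this page, but a common cause of the slow (non-flash) forward path breaking under multi-GPU sharding is a buffer or cache tensor living on a different device than the incoming activations. A minimal sketch of that kind of device-alignment fix follows; the module and attribute names (`SlowAttention`, `bias_mask`) are hypothetical illustrations, not the actual Jamba code.

```python
import torch

class SlowAttention(torch.nn.Module):
    """Illustrative module showing a device-mismatch fix, not Jamba's real code."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.proj = torch.nn.Linear(hidden_size, hidden_size)
        # A buffer created once at load time, e.g. on the first GPU.
        self.register_buffer("bias_mask", torch.zeros(1, hidden_size))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # When the model is sharded across GPUs (e.g. device_map="auto"),
        # hidden_states may arrive on a different device than the buffer.
        # Moving the buffer to the activations' device avoids the
        # cross-device error in the slow forward path.
        mask = self.bias_mask.to(hidden_states.device)
        return self.proj(hidden_states) + mask
```

On a single GPU the `.to(hidden_states.device)` call is a no-op; under multi-GPU sharding it is what prevents the device-mismatch failure.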
References
#29969 - [SigLIP] Add fast tokenizer
#30418 - fix jamba slow forward for multi-gpu
#32831 - [Docs] Update resources
#33111 - [Backbone] Remove out_features everywhere
#33174 - [Zero-shot image classification pipeline] Remove tokenizer_kwargs
#39821 - Support MetaCLIP 2
#58 - Add EoMT DINOv3 model
#59 - Fix attention mask handling in EoMT-DINOv3 converter
#41212 - Add EoMT with DINOv3 backbone
#62 - Add initial DEIMv2 model implementation
Author
SunMarc
Parents
5d64ae9d