transformers
7ca46335
- [FlaxSpeechEncoderDecoderModel] Ensure Input and Output Word Embeddings Are **Not** Tied (#16444)
Committed 4 years ago
[FlaxSpeechEncoderDecoderModel] Ensure Input and Output Word Embeddings Are **Not** Tied (#16444)
* [FlaxSpeechEncoderDecoderModel] Ensure Input and Output Word Embeddings Are **Not** Tied
* rebase
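The commit's point is that the decoder's input embedding table and its output (LM head) projection must remain separate parameters rather than sharing one weight matrix. As a minimal sketch, not taken from the transformers codebase, the toy Flax module below contrasts the two setups; the module name TinyLMHead and the parameter names wte and lm_head are illustrative assumptions.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class TinyLMHead(nn.Module):
    """Toy decoder head contrasting tied vs. untied word embeddings."""
    vocab_size: int = 100
    hidden_size: int = 16
    tie_word_embeddings: bool = False  # the commit keeps this behavior off

    @nn.compact
    def __call__(self, token_ids):
        embed = nn.Embed(self.vocab_size, self.hidden_size, name="wte")
        hidden_states = embed(token_ids)  # stand-in for the real decoder stack
        if self.tie_word_embeddings:
            # Tied: project back through the same embedding matrix.
            return embed.attend(hidden_states)
        # Untied: a separate LM head with its own independently trained weights.
        return nn.Dense(self.vocab_size, use_bias=False, name="lm_head")(hidden_states)

params = TinyLMHead().init(jax.random.PRNGKey(0), jnp.ones((1, 4), dtype=jnp.int32))
# With untied embeddings, "wte" and "lm_head" are distinct parameter trees,
# so they can diverge during training instead of sharing one matrix.
print(params["params"].keys())
```

With untied weights the output projection adds vocab_size × hidden_size parameters, but the input and output spaces are free to learn different representations, which is the behavior this commit guarantees for FlaxSpeechEncoderDecoderModel.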
References
#16444 - [FlaxSpeechEncoderDecoderModel] Ensure Input and Output Word Embeddings Are **Not** Tied
#19449 - [WIP] Fix weights initialization of several vision models
#27720 - Add common processor tests
#29969 - [SigLIP] Add fast tokenizer
#32831 - [Docs] Update resources
#33111 - [Backbone] Remove out_features everywhere
#33174 - [Zero-shot image classification pipeline] Remove tokenizer_kwargs
#59 - Fix attention mask handling in EoMT-DINOv3 converter
#62 - Add initial DEIMv2 model implementation
#65 - Fix RTDetrV2 sine position embedding ordering
#44320 - Add SAM3-LiteText
#44375 - Add RF-DETR
#71 - Use Mask2Former ignore_value in mask matching and losses
#44385 - Fix make check-repo
#45082 - [VidEoMT] Update conversion script
#45110 - Add SAM 3.1
Author: sanchit-gandhi
Parents: e0ac72b7