transformers
be875640 - Add Youtu-LLM model (#43166)

Add Youtu-LLM model (#43166)

* add Youtu-LLM model
* add testing indicators in model test
* [Bug] qwen2_5_omni: cap generation length to be less than the max_position_embedding in DiT (#43068)
  * qwen2_5_omni: make max_mel_frames an inference-time knob
  * do not fail by raising a ValueError; instead continue to run by choosing a target_duration that is capped and aligned
  * added unit tests for Token2Wav shape mismatch
  * make fixup
  * remove unit test which takes too much GPU memory
  * reduce GPU memory usage from the unit test
  * addressed comments
  Signed-off-by: Dong Wang <dongw2019@gmail.com>
* upgrade code quality to match the latest main branch
* correct unnecessary tokenizer annotation
* resolve conflicts
* remove redundant code in modules, decompose test functions
* fix typo
* adapt to latest official code
* update dates
* modify prefix
* update dates
* modify model_type and test path
* update code, as suggested by vasqu
* fix modeling inconsistency
* fix code
* update code to inherit from the config
* fix docstring
* modular
* refactor tests
* skip incompatible tests
* rerun fix-repo
* some last fixes

Signed-off-by: Dong Wang <dongw2019@gmail.com>
Co-authored-by: Dong W <89223086+sniper35@users.noreply.github.com>
Co-authored-by: vasqu <antonprogamer@gmail.com>
Co-authored-by: Anton Vlasjuk <73884904+vasqu@users.noreply.github.com>
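The bundled qwen2_5_omni fix replaces a hard ValueError with a graceful fallback: when the requested generation length would exceed the DiT's max_position_embedding, the target duration is capped and aligned instead. A minimal sketch of that cap-and-align idea, assuming hypothetical names (`cap_and_align_frames`, `max_mel_frames`, `alignment`) that are illustrative only and not the actual qwen2_5_omni API:

```python
def cap_and_align_frames(requested_frames: int,
                         max_mel_frames: int,
                         alignment: int = 4) -> int:
    """Clamp a requested mel-frame count to the model maximum, then
    round down to a multiple of `alignment` so the frame count stays
    both within the cap and on an aligned boundary.

    Note: names and the alignment granularity are assumptions for
    illustration, not the real transformers implementation.
    """
    # cap instead of raising, so generation can continue
    capped = min(requested_frames, max_mel_frames)
    # round down to the nearest aligned frame count
    aligned = (capped // alignment) * alignment
    # always emit at least one aligned chunk
    return max(aligned, alignment)


print(cap_and_align_frames(1000, 750))  # request above cap -> 748
print(cap_and_align_frames(10, 750))    # request below cap -> 8
```

Rounding down (rather than up) is the safe choice here, since rounding up could push the aligned value back past the cap.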