transformers
344012b3 - [qwen2 vl] fix packing with all attentions (#39447)

[qwen2 vl] fix packing with all attentions (#39447)

* fix qwen2 vl packing in FA2
* why? delete!
* qwen2-5-vl seems to work now
* update
* fix tests
* start by adapting FA2 tests
* add similar tests for sdpa/eager
* address comments
* why is this even in conditional model and not base model?
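For context, "packing" here means concatenating several sequences into one row and restarting `position_ids` at each boundary, so the attention backend can keep the packed sequences from attending to each other; this commit's point is that the packed forward pass should behave consistently under FA2, SDPA, and eager. Below is a minimal sketch of that pattern, not the repo's actual test: the model ID and prompts are illustrative, it uses a small text-only LM rather than Qwen2-VL (whose multimodal rope uses 3D position ids), and it assumes a recent transformers version where the masking utilities infer packed-sequence boundaries from `position_ids` that drop back to zero.

```python
# Sketch only: model name, prompts, and packing-detection behavior are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2-0.5B"  # any small causal LM illustrates the pattern
tokenizer = AutoTokenizer.from_pretrained(model_id)

seq_a = tokenizer("The first packed sequence.", return_tensors="pt").input_ids[0]
seq_b = tokenizer("And the second one.", return_tensors="pt").input_ids[0]

# Pack both sequences into a single batch row.
input_ids = torch.cat([seq_a, seq_b]).unsqueeze(0)

# Position ids restart at 0 for the second sequence; the drop back to zero is
# what signals the boundary between packed sequences to the attention code.
position_ids = torch.cat(
    [torch.arange(len(seq_a)), torch.arange(len(seq_b))]
).unsqueeze(0)

# Run the same packed input under several attention implementations.
# ("flash_attention_2" additionally requires a CUDA device and fp16/bf16.)
for impl in ("eager", "sdpa"):
    model = AutoModelForCausalLM.from_pretrained(model_id, attn_implementation=impl)
    with torch.no_grad():
        out = model(input_ids=input_ids, position_ids=position_ids)
    print(impl, out.logits.shape)
```

On older transformers versions, eager and SDPA would need an explicit block-diagonal attention mask to isolate the packed sequences; adding tests that the three backends agree on packed inputs is exactly what the commit's "adapting FA2 tests" and "add similar tests for sdpa/eager" lines describe.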