FIX Beam search w/ mixed adapter batches & encoder (#2921)
When using mixed adapter batches (i.e. using different LoRA adapters in
the same batch), users have to pass adapter_names. When beam search is
used at the same time, these adapter names have to be extended to match
the number of beams. For encoder-decoder models, however, the encoder
part of the model should not use the extended adapter_names even during
beam search, because the encoder still processes the original,
non-extended samples.
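
As a minimal sketch (illustrative only, not the actual PEFT code),
assuming generate() repeats each sample num_beams times for the decoder:

    # One adapter name per sample in the original batch.
    adapter_names = ["adapter_a", "adapter_b"]
    num_beams = 4

    # The decoder sees each sample repeated num_beams times, so the names
    # must be repeated accordingly.
    extended_adapter_names = [
        name for name in adapter_names for _ in range(num_beams)
    ]
    # -> ["adapter_a", "adapter_a", "adapter_a", "adapter_a", "adapter_b", ...]

    # The encoder, in contrast, runs once on the original batch and must
    # therefore receive the original, non-extended adapter_names.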
Whether this special handling is needed used to be determined by calling
model.get_encoder(). However, with transformers v5, every
PreTrainedModel will have a get_encoder method. The new convention is
that it returns self if there is no encoder, and this is what is now
being checked.
https://github.com/huggingface/transformers/pull/42156
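
A rough sketch of the updated check under this convention (the function
name and structure are illustrative, not the exact PEFT code):

    def model_has_encoder(model) -> bool:
        # Old heuristic: the mere presence of get_encoder implied an
        # encoder-decoder model. With transformers v5, every PreTrainedModel
        # defines get_encoder(), and models without an encoder return the
        # model itself, so compare identities instead.
        if not hasattr(model, "get_encoder"):
            return False
        return model.get_encoder() is not model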
Note that said PR contains a small bug that leads to self not always
being returned. Therefore, to fully fix the issue on transformers main,
we also need to wait for this PR:
https://github.com/huggingface/transformers/pull/42295