FIX Multiple adapters and modules_to_save (#1615)
Previously, there was a bug when using multiple adapters where some had
modules_to_save and others did not: when switching to an adapter without
modules_to_save, the ModulesToSaveWrapper would raise an error because it
could not find that adapter. Now, when this situation is detected, the
wrapper is simply disabled (so the original weights are used).
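
A minimal sketch of this first scenario; the checkpoint, target_modules, and
modules_to_save choices below are illustrative placeholders, not taken from
this PR:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# first adapter ("default") wraps the LM head via modules_to_save
config_with = LoraConfig(target_modules=["q_proj", "v_proj"], modules_to_save=["lm_head"])
model = get_peft_model(base, config_with)

# second adapter does not use modules_to_save at all
config_without = LoraConfig(target_modules=["q_proj", "v_proj"])
model.add_adapter("other", config_without)

# previously, this raised inside ModulesToSaveWrapper because no copy of
# lm_head exists for "other"; now the wrapper is disabled and the original
# lm_head weights are used
model.set_adapter("other")
```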
Moreover, there was an issue that when using classes such as
PeftModelForSequenceClassification, we implicitly added the classifier
layers to model.modules_to_save. However, this would only add a new
ModulesToSaveWrapper instance for the first adapter being initialized.
When initializing a second adapter via add_adapter, this information was
ignored. To fix this, I now update peft_config.modules_to_save to
explicitly add the classifier layers. This is a departure from how this
worked previously, but I couldn't find a better way to ensure that this
bug is fixed.
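
The sequence classification case roughly corresponds to the following sketch;
the checkpoint, target_modules, and adapter name are assumptions for
illustration:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# task_type SEQ_CLS makes PEFT wrap the classifier head in a ModulesToSaveWrapper
config = LoraConfig(task_type=TaskType.SEQ_CLS, target_modules=["q_lin", "v_lin"])
model = get_peft_model(base, config)

# with the fix, peft_config.modules_to_save lists the classifier layers
# explicitly, so a second adapter also gets its own copy of the classifier
model.add_adapter(
    "second", LoraConfig(task_type=TaskType.SEQ_CLS, target_modules=["q_lin", "v_lin"])
)
print(model.peft_config["default"].modules_to_save)
```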
Finally, there was a bug in add_weighted_adapter when merging multiple
adapters that have modules_to_save. Previously, when we called
model.add_weighted_adapter, the LoRA weights were merged and a new
ModulesToSaveWrapper was added for the new adapter based on the first
LoraConfig of the two adapters. This ModulesToSaveWrapper is just a copy
of the original weights. Thus, when switching to the newly merged adapter,
the original weights were used for modules_to_save. This doesn't make much
sense and is probably surprising for the user. Now, we raise an error when
we detect this to alert the user to this fact.
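
A hedged sketch of the merge case that now raises; the checkpoint, adapter
names, and combination_type are placeholders:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

def make_config():
    return LoraConfig(target_modules=["q_proj", "v_proj"], modules_to_save=["lm_head"])

model = get_peft_model(base, make_config())  # adapter "default", has modules_to_save
model.add_adapter("other", make_config())    # second adapter, also has modules_to_save

# both adapters carry modules_to_save, so the merged adapter would silently
# fall back to the original lm_head weights; this now raises an error instead
model.add_weighted_adapter(
    adapters=["default", "other"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
```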
Note that when only one of the adapters to be merged has modules_to_save,
no error is raised; instead, that module is used.
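
Reusing the setup from the previous sketch (and skipping the call that
raises), the one-sided case looks like this:

```python
# "plain" has no modules_to_save, so merging it with "default" is allowed
# and the merged adapter uses the lm_head copy that belongs to "default"
model.add_adapter("plain", LoraConfig(target_modules=["q_proj", "v_proj"]))
model.add_weighted_adapter(
    adapters=["default", "plain"],
    weights=[0.5, 0.5],
    adapter_name="merged_ok",
    combination_type="linear",
)
```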