peft
1a5d0f81 - FIX: Don't target the classification head when using target_modules="all-linear" (#2033)

Fixes #2027

When using a transformers sequence classification model, target_modules="all-linear" should not wrap the classification head with LoRA, because the head is already wrapped with ModulesToSave, i.e. it will be fully fine-tuned, which is the generally desired behavior.

Before this bug fix, the classification head would be double-wrapped. With #2028, this now raises an error; with this PR, it is avoided completely. Keeping #2028 is still worthwhile, because it helps prevent other situations where double-wrapping might occur due to misconfiguration.

Note that there is no foolproof way to detect the classification head; we have to rely on the transformers naming convention.
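A minimal sketch of the behavior this fix guarantees. The model checkpoint (facebook/opt-125m) and the head module name ("score") are assumptions chosen for illustration and are not part of the commit; the check simply verifies that no LoRA layers end up inside the classification head when target_modules="all-linear" is used.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# Assumed example checkpoint; its classification head is an nn.Linear named "score".
base = AutoModelForSequenceClassification.from_pretrained(
    "facebook/opt-125m", num_labels=2
)

# "all-linear" targets the linear layers of the backbone; with task_type=SEQ_CLS,
# the head is handled via ModulesToSave (fully fine-tuned) instead.
config = LoraConfig(task_type=TaskType.SEQ_CLS, target_modules="all-linear")
model = get_peft_model(base, config)

# After this fix, no LoRA adapter should live inside the classification head.
for name, _ in model.named_modules():
    if "lora_A" in name or "lora_B" in name:
        assert "score" not in name, "classification head should not be LoRA-wrapped"

model.print_trainable_parameters()
```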