peft
eaab05e1 - Hotswap allow different alpha scalings and ranks (#2177)

Hotswap allow different alpha scalings and ranks (#2177)

Hotswapping of LoRA adapters is already implemented, but when alpha scalings or ranks differ, hotswapping triggers recompilation if the model is compiled, which is inefficient. Users can now call prepare_model_for_compiled_hotswap to prevent recompilation in many cases (see the doc update for caveats).
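
A minimal sketch of the intended workflow, assuming the hotswap utilities live in peft.utils.hotswap as described in the PEFT docs; the base model, adapter paths, and the max_rank value are placeholders:

```python
import torch
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter, prepare_model_for_compiled_hotswap

# Placeholder base model and adapter checkpoint paths.
base_model = ...  # e.g. a transformers model
path_adapter_0 = "path/to/adapter_0"
path_adapter_1 = "path/to/adapter_1"

# Load the first LoRA adapter.
model = PeftModel.from_pretrained(base_model, path_adapter_0)

# Pad LoRA weights/scalings so that a later adapter with a different
# rank or alpha does not force torch.compile to recompile.
prepare_model_for_compiled_hotswap(model, target_rank=32)  # 32 is a placeholder max rank

model = torch.compile(model)
# ... run inference with adapter 0 ...

# Swap in the second adapter without triggering recompilation.
hotswap_adapter(model, path_adapter_1, adapter_name="default")
# ... run inference with adapter 1 ...
```

Note that prepare_model_for_compiled_hotswap should be called before torch.compile, and the target rank should be at least as large as the largest rank among the adapters that will be swapped in.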