peft
8d3039b6 - ENH Add LoRA multihead attention module (#1324)

Commit
360 days ago
ENH Add LoRA multihead attention module (#1324)

For now, only works with _qkv_same_embed_dim=True.

---------

Co-authored-by: Wang, Yi <yi.a.wang@intel.com>
Co-authored-by: keakon <keakon@gmail.com>
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: Saeid Ghafouri <sdghafouri@gmail.com>
Co-authored-by: Fanli Lin <fanli.lin@intel.com>
Co-authored-by: githubnemo <githubnemo@users.noreply.github.com>
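A minimal sketch of how the feature can be exercised, not taken from the commit itself: it assumes a hypothetical toy model whose nn.MultiheadAttention submodule is named "mha" and targets it via LoraConfig. Leaving kdim/vdim at their defaults keeps _qkv_same_embed_dim=True, which is the case this commit supports.

```python
import torch
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class ToyModel(nn.Module):
    """Hypothetical model containing a MultiheadAttention layer."""

    def __init__(self):
        super().__init__()
        # kdim/vdim left at None => _qkv_same_embed_dim is True,
        # the only configuration supported by this commit.
        self.mha = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
        self.head = nn.Linear(64, 2)

    def forward(self, x):
        attn_out, _ = self.mha(x, x, x)
        return self.head(attn_out)


base_model = ToyModel()
# "mha" is the (hypothetical) name of the MultiheadAttention submodule above.
config = LoraConfig(r=8, lora_alpha=16, target_modules=["mha"])
peft_model = get_peft_model(base_model, config)
peft_model.print_trainable_parameters()

# Forward pass still works through the LoRA-wrapped attention module.
x = torch.randn(2, 10, 64)
out = peft_model(x)
```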