[LoRA] feat: add lora attention processor for pt 2.0. #3594
feat: add lora attention processor for pt 2.0.
bfcd0ad1
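For context, a minimal sketch (not the PR's exact code) of a LoRA attention processor routed through PyTorch 2.0's `scaled_dot_product_attention`; the `attn` argument is assumed to expose `to_q`/`to_k`/`to_v`/`to_out` projections and a `heads` attribute, and the rank/init choices are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRALinearLayer(nn.Module):
    """Low-rank update up(down(x)); init values are illustrative."""

    def __init__(self, in_features, out_features, rank=4):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)
        self.up = nn.Linear(rank, out_features, bias=False)
        nn.init.normal_(self.down.weight, std=1 / rank)
        nn.init.zeros_(self.up.weight)

    def forward(self, x):
        return self.up(self.down(x))


class LoRAAttnProcessor2_0(nn.Module):
    """Sketch: add LoRA deltas to the q/k/v/out projections, then call SDPA."""

    def __init__(self, hidden_size, cross_attention_dim=None, rank=4):
        super().__init__()
        if not hasattr(F, "scaled_dot_product_attention"):
            raise ImportError("LoRAAttnProcessor2_0 requires PyTorch 2.0 or newer.")
        cross_attention_dim = cross_attention_dim or hidden_size
        self.to_q_lora = LoRALinearLayer(hidden_size, hidden_size, rank)
        self.to_k_lora = LoRALinearLayer(cross_attention_dim, hidden_size, rank)
        self.to_v_lora = LoRALinearLayer(cross_attention_dim, hidden_size, rank)
        self.to_out_lora = LoRALinearLayer(hidden_size, hidden_size, rank)

    def __call__(self, attn, hidden_states, encoder_hidden_states=None, scale=1.0):
        context = hidden_states if encoder_hidden_states is None else encoder_hidden_states

        query = attn.to_q(hidden_states) + scale * self.to_q_lora(hidden_states)
        key = attn.to_k(context) + scale * self.to_k_lora(context)
        value = attn.to_v(context) + scale * self.to_v_lora(context)

        # SDPA expects (batch, heads, seq_len, head_dim).
        batch, seq_len, _ = query.shape
        head_dim = query.shape[-1] // attn.heads
        query = query.view(batch, -1, attn.heads, head_dim).transpose(1, 2)
        key = key.view(batch, -1, attn.heads, head_dim).transpose(1, 2)
        value = value.view(batch, -1, attn.heads, head_dim).transpose(1, 2)

        attn_output = F.scaled_dot_product_attention(query, key, value)
        attn_output = attn_output.transpose(1, 2).reshape(batch, seq_len, -1)

        # Linear part of to_out (diffusers keeps [Linear, Dropout] in a ModuleList).
        return attn.to_out[0](attn_output) + scale * self.to_out_lora(attn_output)
```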
explicit context manager for SDPA.
53c51997
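The "explicit context manager" here is presumably PyTorch 2.0's `torch.backends.cuda.sdp_kernel`, which restricts which SDPA backend may run; a hedged sketch of that usage (the flag combination is illustrative, and the later commits toggle these flags before the explicit dispatch is removed):

```python
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 64, 64, device="cuda", dtype=torch.float16)
k, v = torch.randn_like(q), torch.randn_like(q)

# Restrict SDPA to the flash kernel; enabling enable_mem_efficient=True instead
# corresponds to falling back to the memory-efficient attention kernel.
with torch.backends.cuda.sdp_kernel(
    enable_flash=True, enable_math=False, enable_mem_efficient=False
):
    out = F.scaled_dot_product_attention(q, k, v)
```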
switch to flash attention
00df0a41
make shapes compatible to work optimally with SDPA.
71b8ad27
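`F.scaled_dot_product_attention` expects `(batch, heads, seq_len, head_dim)` tensors, whereas the older processors flattened heads into the batch dimension; a small sketch of the reshape that makes the projections SDPA-friendly (shapes are illustrative):

```python
import torch
import torch.nn.functional as F

batch, seq_len, heads, head_dim = 2, 77, 8, 64
hidden = torch.randn(batch, seq_len, heads * head_dim)

# SDPA wants (batch, heads, seq_len, head_dim) rather than the
# (batch * heads, seq_len, head_dim) layout used by the classic processors.
q = hidden.view(batch, seq_len, heads, head_dim).transpose(1, 2)
k, v = q.clone(), q.clone()

out = F.scaled_dot_product_attention(q, k, v)          # (batch, heads, seq_len, head_dim)
out = out.transpose(1, 2).reshape(batch, seq_len, -1)  # back to (batch, seq_len, heads * head_dim)
```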
resolve conflicts.
4142edb7
fix: circular import problem.
bf60598c
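The log does not show which modules were involved, but a circular import is typically broken by deferring the import into the function that needs it; a generic sketch with illustrative module names:

```python
# attention_processor.py (illustrative): avoid importing the LoRA layers at
# module import time if that module in turn imports this one.
def _get_lora_layer(in_features, out_features, rank=4):
    # Deferred import breaks the cycle between the two modules.
    from .lora import LoRALinearLayer

    return LoRALinearLayer(in_features, out_features, rank)
```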
explicitly specify the flash attention kernel in sdpa
e5fad840
fall back to efficient attention context manager.
15190752
remove explicit dispatch.
a193d265
fix: removed processor.
7898c112
fix: remove optional from type annotation.
68674274
feat: make changes regarding LoRAAttnProcessor2_0.
4d3afd2d
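For context, a hedged usage sketch of how such a processor is typically attached to a diffusers UNet via `set_attn_processor`; the model id, rank, and the per-block hidden-size derivation below are illustrative:

```python
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import LoRAAttnProcessor2_0

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Attach a LoRA processor to every attention module, deriving hidden sizes per block.
lora_procs = {}
for name in unet.attn_processors.keys():
    cross_dim = None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    lora_procs[name] = LoRAAttnProcessor2_0(
        hidden_size=hidden_size, cross_attention_dim=cross_dim, rank=4
    )

unet.set_attn_processor(lora_procs)
```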
remove confusing warning.
b694e3f9
formatting.
ffb136d8
relax tolerance for PT 2.0
8c304bca
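SDPA kernels are not bit-identical to the vanilla attention path, so slice comparisons in the tests need a looser tolerance under PyTorch 2.0; a hypothetical illustration (the values are placeholders, not taken from the actual tests):

```python
import torch

expected = torch.tensor([0.3936, 0.6110, 0.4554])  # placeholder reference slice
output = torch.tensor([0.3941, 0.6103, 0.4560])    # placeholder pipeline output

# Relax the tolerance when SDPA is available, since kernel choice changes numerics slightly.
atol = 1e-3 if hasattr(torch.nn.functional, "scaled_dot_product_attention") else 1e-4
assert torch.allclose(output, expected, atol=atol)
```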
fix: loading message.
9d12c34a
remove unnecessary logging.
3c3c2f7b
sayakpaul marked this pull request as ready for review 2 years ago
add: entry to the docs.
ba3f7ad2
merge main and resolve conflicts.
5017e922
Merge branch 'main' into feat/lora-attn-pt2
06e90167
add: network_alpha argument.
0c764515
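`network_alpha` follows the kohya-ss LoRA convention: the low-rank update is scaled by `alpha / rank`, so checkpoints trained with a given alpha keep the same effective magnitude regardless of rank. A minimal sketch of that scaling (names and defaults are illustrative):

```python
import torch.nn as nn


class LoRALinearLayer(nn.Module):
    """Sketch of the alpha / rank scaling used by kohya-style LoRA checkpoints."""

    def __init__(self, in_features, out_features, rank=4, network_alpha=None):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)
        self.up = nn.Linear(rank, out_features, bias=False)
        self.network_alpha = network_alpha
        self.rank = rank

    def forward(self, x):
        out = self.up(self.down(x))
        if self.network_alpha is not None:
            # Scale the update so its magnitude is independent of the chosen rank.
            out = out * (self.network_alpha / self.rank)
        return out
```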
relax tolerance.
b13c5df9
sayakpaul merged 8669e831 into main 2 years ago
sayakpaul deleted the feat/lora-attn-pt2 branch 2 years ago