[Experimental] Diffusion LoRA DPO training #6422
bd274f12  add: experimental script for diffusion dpo training.
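The core idea, sketched below under assumptions (this is not a quote of the script; `unet`, `ref_unet`, and the shapes are illustrative): each training example is a preferred/rejected image pair, both halves are noised with the same noise and timestep, and both the trainable LoRA UNet and a frozen reference UNet predict that noise so their errors can be compared in the DPO objective.

```python
import torch
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler(num_train_timesteps=1000)

def dpo_forward(unet, ref_unet, latents_w, latents_l, prompt_embeds):
    """Noise a preference pair with shared noise/timesteps and run both UNets."""
    bsz = latents_w.shape[0]
    noise = torch.randn_like(latents_w)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents_w.device
    )

    # Winners in the first half of the batch, losers in the second half,
    # sharing the same noise and timestep so the two errors are comparable.
    latents = torch.cat([latents_w, latents_l], dim=0)
    noise = noise.repeat(2, 1, 1, 1)
    timesteps = timesteps.repeat(2)
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    prompt_embeds = prompt_embeds.repeat(2, 1, 1)  # same caption for both halves
    model_pred = unet(noisy_latents, timesteps, prompt_embeds)
    with torch.no_grad():
        ref_pred = ref_unet(noisy_latents, timesteps, prompt_embeds)
    return model_pred, ref_pred, noise  # `noise` is the epsilon target

# Tiny stand-in "UNets" just to show the call pattern end to end.
dummy_unet = lambda x, t, c: x
out = dpo_forward(dummy_unet, dummy_unet, torch.randn(2, 4, 64, 64),
                  torch.randn(2, 4, 64, 64), torch.randn(2, 77, 768))
print([o.shape for o in out])
```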
5332fba2  random_crop cli.
9c7cb249  fix: caption tokenization.
09b390a7  fix: pixel_values index.
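The `pixel_values` indexing fix suggests each example packs its preference pair into one tensor; a hedged sketch of one common layout, concatenating winner and loser along the channel axis and splitting them back into a doubled batch before encoding:

```python
import torch

def pack_pair(img_w: torch.Tensor, img_l: torch.Tensor) -> torch.Tensor:
    """Pack one (preferred, rejected) pair of (3, H, W) tensors into (6, H, W)."""
    return torch.cat([img_w, img_l], dim=0)

def unpack_batch(pixel_values: torch.Tensor) -> torch.Tensor:
    """(B, 6, H, W) -> (2*B, 3, H, W): winners first, losers second."""
    return torch.cat(pixel_values.chunk(2, dim=1), dim=0)

batch = torch.stack([pack_pair(torch.rand(3, 64, 64), torch.rand(3, 64, 64)) for _ in range(4)])
feed_pixel_values = unpack_batch(batch)
print(batch.shape, feed_pixel_values.shape)  # (4, 6, 64, 64) (8, 3, 64, 64)
```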
df609f03  fix: grad?
67305199  debug
e6af686e  fix: reduction.
963bd2c1  fixes in the loss calculation.
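A hedged sketch of the loss these commits converge on, following the Diffusion-DPO formulation (Wallace et al., 2023); `beta_dpo` and the batch layout are assumptions. The key reduction detail: the squared errors are averaged over all non-batch dimensions first, the winner/loser and model/reference differences are combined per sample, and only then is the batch mean taken.

```python
import torch
import torch.nn.functional as F

def diffusion_dpo_loss(model_pred, ref_pred, target, beta_dpo=5000.0):
    """model_pred/ref_pred/target: (2*B, C, H, W), winners first, losers second."""
    # Per-sample MSE: reduce over channel/spatial dims only, keep the batch dim.
    model_err = (model_pred.float() - target.float()).pow(2).mean(dim=[1, 2, 3])
    ref_err = (ref_pred.float() - target.float()).pow(2).mean(dim=[1, 2, 3])

    model_w, model_l = model_err.chunk(2)
    ref_w, ref_l = ref_err.chunk(2)

    # How much better the trainable model fits the winner vs. the loser,
    # relative to the frozen reference model.
    model_diff = model_w - model_l
    ref_diff = ref_w - ref_l

    inside_term = -0.5 * beta_dpo * (model_diff - ref_diff)
    return -F.logsigmoid(inside_term).mean()  # batch reduction happens last

loss = diffusion_dpo_loss(torch.randn(8, 4, 8, 8), torch.randn(8, 4, 8, 8), torch.randn(8, 4, 8, 8))
print(loss.item())
```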
174b0f35  style
e2ad9ce4  fix: unwrap call.
90c8d393  fix: validation inference.
b618d468  Merge branch 'main' into dpo-training
9b689290  Merge branch 'main' into dpo-training
11ce189e  Merge branch 'main' into dpo-training
bc5a8715  add: initial sdxl script
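The SDXL variant additionally needs SDXL's micro-conditioning inputs (and its two text encoders); a hedged sketch of building the added time ids, with illustrative names:

```python
import torch

def compute_time_ids(original_size, crop_top_left, target_size=(1024, 1024)):
    """Illustrative: (orig_h, orig_w, crop_top, crop_left, target_h, target_w)."""
    add_time_ids = list(original_size) + list(crop_top_left) + list(target_size)
    return torch.tensor([add_time_ids], dtype=torch.float32)

print(compute_time_ids(original_size=(768, 1024), crop_top_left=(0, 128)))
```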
7d230931  debug
e0cd6530  make sure images in the tuple are of same res
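A minimal illustration of the constraint behind this commit: the two images of a pair must share a resolution before their tensors can be concatenated. One simple guard (the resampling policy is an assumption):

```python
from PIL import Image

def match_resolution(img_w: Image.Image, img_l: Image.Image):
    """Resize the rejected image to the preferred image's size if they differ."""
    if img_l.size != img_w.size:  # PIL `.size` is (width, height)
        img_l = img_l.resize(img_w.size)
    return img_w, img_l

a, b = match_resolution(Image.new("RGB", (512, 512)), Image.new("RGB", (640, 480)))
print(a.size, b.size)  # (512, 512) (512, 512)
```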
e2a90d27  fix model_max_length
14a309de  report print
14ed18f1  boom
946efa5b  fix: numerical issues.
31d60f60  fix: resolution
5ec4f515  comment about resize.
ab5efc2f  change the order of the training transformation.
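A hedged sketch of one sensible transformation order consistent with these commits: resize each image, convert to tensors, stack the pair, then crop, flip, and normalize the stacked tensor so both images receive exactly the same augmentation. `resolution` and `random_crop` mirror CLI options mentioned in this log; their defaults here are assumptions.

```python
import torch
from PIL import Image
from torchvision import transforms

resolution = 512
random_crop = True  # mirrors the random_crop CLI option added earlier in this PR

train_resize = transforms.Resize(resolution)  # shorter edge -> `resolution`
train_crop = transforms.RandomCrop(resolution) if random_crop else transforms.CenterCrop(resolution)
train_flip = transforms.RandomHorizontalFlip(p=0.5)
normalize = transforms.Normalize([0.5], [0.5])  # map [0, 1] -> [-1, 1]
to_tensor = transforms.ToTensor()

def preprocess_pair(img_w: Image.Image, img_l: Image.Image) -> torch.Tensor:
    # Assumes the pair already shares a resolution (see the earlier sketch).
    pair = torch.cat([to_tensor(train_resize(img_w.convert("RGB"))),
                      to_tensor(train_resize(img_l.convert("RGB")))], dim=0)  # (6, H, W)
    pair = train_crop(pair)   # spatial ops happen before normalization...
    pair = train_flip(pair)   # ...and on the stacked tensor, so both images match
    return normalize(pair)

sample = preprocess_pair(Image.new("RGB", (768, 512)), Image.new("RGB", (768, 512)))
print(sample.shape)  # torch.Size([6, 512, 512])
```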
1784115a  save call.
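A hedged sketch of what the save call likely involves; the individual utilities (`Accelerator.unwrap_model`, `get_peft_model_state_dict`, `convert_state_dict_to_diffusers`, `save_lora_weights`) are real accelerate/peft/diffusers APIs, but how the script wires them together is an assumption:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import convert_state_dict_to_diffusers
from peft.utils import get_peft_model_state_dict

def save_lora(accelerator, unet, output_dir):
    unet = accelerator.unwrap_model(unet)  # strip the accelerate/DDP wrapper first
    unet = unet.to(torch.float32)          # save the adapter weights in full precision
    unet_lora_state_dict = convert_state_dict_to_diffusers(get_peft_model_state_dict(unet))
    StableDiffusionPipeline.save_lora_weights(
        save_directory=output_dir,
        unet_lora_layers=unet_lora_state_dict,
        text_encoder_lora_layers=None,
        safe_serialization=True,
    )
```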
917a3a64  debug
9c0e5e47  remove print
692181e2  manually detaching necessary?
cbdb3c7b  use the same vae for validation.
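Reusing the training VAE for validation keeps the decoded images consistent with the latents the LoRA was trained on; a hedged sketch, with illustrative argument handling and no claim about the script's exact wiring:

```python
import torch
from diffusers import StableDiffusionPipeline

def build_validation_pipeline(base_model_id, vae, unet, weight_dtype=torch.float16):
    """Build a validation pipeline that reuses the training VAE and UNet."""
    pipeline = StableDiffusionPipeline.from_pretrained(
        base_model_id,
        vae=vae,                  # the same VAE used during training
        unet=unet,                # the (unwrapped) UNet with LoRA layers attached
        torch_dtype=weight_dtype,
        safety_checker=None,
    )
    pipeline.set_progress_bar_config(disable=True)
    return pipeline
```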
71f9ce06  add: readme.
682f8432  Merge branch 'main' into dpo-training
sayakpaul merged commit 2a97067b into main 1 year ago.
sayakpaul deleted the dpo-training branch 1 year ago.
Assignees: No one assigned