Fix: training resume from fp16 for SDXL Consistency Distillation #6840
Fix: training resume from fp16 for lcm distill lora sdxl
b5322915
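For context, the failure showed up in the script's standard resume path: a run launched with `--mixed_precision=fp16` errored out when restoring a checkpoint. A minimal sketch of that flow, assuming the usual diffusers checkpoint naming (`checkpoint-<step>` directories) and a placeholder `output_dir`:

```python
import os

from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")
output_dir = "lcm-lora-sdxl"  # placeholder; the real script reads args.output_dir

# ... model, optimizer, and dataloaders are created and prepared here ...

# Standard resume: pick the newest checkpoint-<step> directory and let
# accelerate restore model, optimizer, and RNG state. Before this PR,
# this restore step failed for runs trained with fp16 mixed precision.
if os.path.isdir(output_dir):
    dirs = [d for d in os.listdir(output_dir) if d.startswith("checkpoint")]
    if dirs:
        latest = sorted(dirs, key=lambda d: int(d.split("-")[1]))[-1]
        accelerator.load_state(os.path.join(output_dir, latest))
```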
asrimanth changed the title from "Fix: training resume from fp16 for lcm distill lora sdxl" to "Fix: training resume from fp16 for SDXL Consistency Distillation" 2 years ago
Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl
53c0a072
Fix coding quality - run linter
cbea2b13
Merge branch 'huggingface:main' into fix-fp16-train-resume-lcm-sdxl
df94b622
Fix 1 - shift mixed precision cast before optimizer
d5ed3352
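This commit reorders setup so that the mixed-precision cast happens before the optimizer is created; otherwise the optimizer state saved in a checkpoint is keyed to parameters in a different dtype than the ones it is restored onto. A minimal sketch of the ordering with a toy module (the real script applies this to the SDXL UNet's LoRA parameters, roughly what diffusers' `cast_training_params` helper does):

```python
import torch

model = torch.nn.Linear(8, 8)  # stands in for the LoRA-equipped UNet

# First cast the whole model to the half-precision compute dtype ...
model.to(dtype=torch.float16)

# ... then upcast only the trainable parameters back to fp32 so the
# optimizer state is created (and later restored) in full precision.
for param in model.parameters():
    if param.requires_grad:
        param.data = param.data.to(torch.float32)

# Only now build the optimizer: its state tensors line up with the fp32
# trainable parameters, so save_state/load_state round-trips cleanly.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```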
Fix 2 - State dict errors by removing load_lora_into_unet
e6a1f827
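This commit drops `load_lora_into_unet` from the resume hook, which raised state-dict errors, and instead reads the serialized LoRA weights and applies them to the PEFT adapter directly. A sketch of that pattern, assuming the peft/diffusers utilities used across the diffusers training scripts:

```python
from diffusers import StableDiffusionXLPipeline
from diffusers.utils import convert_unet_state_dict_to_peft
from peft import set_peft_model_state_dict


def load_model_hook(models, input_dir):
    # Pop the UNet that accelerate hands back when restoring a checkpoint.
    unet_ = models.pop()

    # Read the serialized LoRA weights from the checkpoint directory and
    # keep only the UNet entries, stripping the "unet." prefix.
    lora_state_dict, _ = StableDiffusionXLPipeline.lora_state_dict(input_dir)
    unet_state_dict = {
        k.replace("unet.", ""): v
        for k, v in lora_state_dict.items()
        if k.startswith("unet.")
    }

    # Convert the keys to PEFT's layout and load them onto the adapter
    # in place, instead of going through load_lora_into_unet.
    unet_state_dict = convert_unet_state_dict_to_peft(unet_state_dict)
    set_peft_model_state_dict(unet_, unet_state_dict, adapter_name="default")
```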
Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl
efa505ee
Update train_lcm_distill_lora_sdxl.py - Revert default cache dir to None
99e52902
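This revert simply restores `--cache_dir` to its usual default, so the script falls back to the standard Hugging Face cache location rather than a hard-coded path. The intended flag, following the script's argparse conventions:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--cache_dir",
    type=str,
    default=None,  # None defers to the default Hugging Face cache location
    help="Directory where downloaded models and datasets are stored.",
)
```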
Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl
c539ac7c
sayakpaul merged commit a11b0f83 into main 2 years ago