diffusers
Fix: training resume from fp16 for SDXL Consistency Distillation
#6840
Merged


sayakpaul merged 9 commits into huggingface:main from fix-fp16-train-resume-lcm-sdxl
sriagastya Fix: training resume from fp16 for lcm distill lora sdxl
b5322915
sriagastya changed the title from "Fix: training resume from fp16 for lcm distill lora sdxl" to "Fix: training resume from fp16 for SDXL Consistency Distillation" 2 years ago
sayakpaul Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl
53c0a072
sriagastya Fix coding quality - run linter
cbea2b13
sriagastya Merge branch 'huggingface:main' into fix-fp16-train-resume-lcm-sdxl
df94b622
sriagastya Fix 1 - shift mixed precision cast before optimizer
d5ed3352
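Fix 1 reorders the training script so that trainable parameters are upcast to full precision before the optimizer is constructed; if the optimizer instead captures fp16 parameters, the optimizer state saved at a checkpoint does not match what a resumed run expects. A minimal, dependency-free sketch of that ordering (all names below are illustrative, not the actual script's API):

```python
# Toy model of the "cast before optimizer" ordering fix.
# Parameters are plain dicts; a real script would operate on
# torch tensors (e.g. upcasting LoRA params to float32).

def cast_params(params, dtype):
    # Upcast every trainable parameter in place to the target dtype.
    for p in params:
        p["dtype"] = dtype

class Optimizer:
    # Toy optimizer that records the dtype of the params it was built with,
    # standing in for optimizer state that is checkpointed and restored.
    def __init__(self, params):
        self.param_dtypes = [p["dtype"] for p in params]

# Trainable (e.g. LoRA) parameters start out in mixed precision.
params = [
    {"name": "lora_A", "dtype": "fp16"},
    {"name": "lora_B", "dtype": "fp16"},
]

# The fix: cast to full precision first, THEN build the optimizer,
# so saved optimizer state is fp32 and resume sees matching dtypes.
cast_params(params, "fp32")
opt = Optimizer(params)
assert opt.param_dtypes == ["fp32", "fp32"]
```

With the original ordering (optimizer first, cast second), the toy optimizer would have recorded `fp16` dtypes, which is the mismatch the commit moves the cast to avoid.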
sriagastya Fix 2 - State dict errors by removing load_lora_into_unet
e6a1f827
sriagastya Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl
efa505ee
sayakpaul commented on 2024-02-08
sriagastya Update train_lcm_distill_lora_sdxl.py - Revert default cache dir to None
99e52902
sayakpaul Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl
c539ac7c
sayakpaul merged a11b0f83 into main 2 years ago
