diffusers
Fix: training resume from fp16 for SDXL Consistency Distillation
#6840
Merged


sayakpaul merged 9 commits into huggingface:main from fix-fp16-train-resume-lcm-sdxl
asrimanth — Fix: training resume from fp16 for lcm distill lora sdxl (b5322915)
asrimanth changed the title from "Fix: training resume from fp16 for lcm distill lora sdxl" to "Fix: training resume from fp16 for SDXL Consistency Distillation" 2 years ago
sayakpaul — Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl (53c0a072)
asrimanth — Fix coding quality: run linter (cbea2b13)
asrimanth — Merge branch 'huggingface:main' into fix-fp16-train-resume-lcm-sdxl (df94b622)
asrimanth — Fix 1: shift the mixed-precision cast before optimizer creation (d5ed3352)
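The first fix moves the fp32 upcast of the trainable (LoRA) parameters ahead of optimizer creation, so that the optimizer's moment buffers are allocated in full precision and a resume from an fp16 checkpoint does not corrupt them. Below is a minimal sketch of that pattern, assuming PyTorch; the toy `Linear` model and the local `cast_training_params` helper are illustrative stand-ins, not the actual code from `train_lcm_distill_lora_sdxl.py`:

```python
import torch

def cast_training_params(model: torch.nn.Module, dtype: torch.dtype = torch.float32) -> None:
    # Illustrative helper: upcast only the trainable parameters (e.g. LoRA
    # adapters) to fp32; frozen base weights would stay in fp16.
    for param in model.parameters():
        if param.requires_grad:
            param.data = param.data.to(dtype)

# Toy stand-in for the trainable adapter layers of the UNet.
model = torch.nn.Linear(4, 4).to(torch.float16)

# The ordering this PR enforces: cast trainable params to fp32 BEFORE the
# optimizer is constructed, so its state is created in full precision.
cast_training_params(model, torch.float32)
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4
)

print(model.weight.dtype)  # torch.float32
```

If the cast ran after optimizer creation instead, the optimizer would hold references to fp16 tensors, which is the failure mode when resuming fp16 training.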
asrimanth — Fix 2: fix state dict errors by removing load_lora_into_unet (e6a1f827)
asrimanth — Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl (efa505ee)
sayakpaul commented on 2024-02-08
asrimanth — Update train_lcm_distill_lora_sdxl.py: revert default cache dir to None (99e52902)
sayakpaul — Merge branch 'main' into fix-fp16-train-resume-lcm-sdxl (c539ac7c)
sayakpaul merged a11b0f83 into main 2 years ago
