diffusers
b9d52fca - [train_lcm_distill_lora_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env (#8446)

Committed 1 year ago
fix num_train_epochs

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Files changed:
  • examples/consistency_distillation/train_lcm_distill_lora_sdxl.py
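For context, the class of bug this commit addresses: when only `--num_train_epochs` is supplied, the total training-step count is derived from the dataloader length, but in a distributed run the data is sharded across processes and the LR scheduler is stepped on every process, so the step counts passed to the scheduler must account for `num_processes`. The sketch below illustrates that computation under stated assumptions; `scheduler_step_counts` is a hypothetical helper, not a function from the script, and the exact fix in #8446 may differ in detail.

```python
import math

def scheduler_step_counts(
    dataset_len: int,
    batch_size: int,
    gradient_accumulation_steps: int,
    num_train_epochs: int,
    num_processes: int,
    lr_warmup_steps: int,
) -> dict:
    """Hypothetical helper: compute the step counts that would feed
    get_scheduler(num_warmup_steps=..., num_training_steps=...) when
    only --num_train_epochs is given in a multi-process run."""
    # After the dataloader is prepared/sharded, each process sees
    # dataset_len / (batch_size * num_processes) batches per epoch.
    num_update_steps_per_epoch = math.ceil(
        math.ceil(dataset_len / (batch_size * num_processes))
        / gradient_accumulation_steps
    )
    max_train_steps = num_train_epochs * num_update_steps_per_epoch
    # Assumption: the scheduler is stepped once per process per optimizer
    # step, so warmup and total counts are scaled by num_processes.
    return {
        "num_warmup_steps": lr_warmup_steps * num_processes,
        "num_training_steps": max_train_steps * num_processes,
        "max_train_steps": max_train_steps,
    }

counts = scheduler_step_counts(
    dataset_len=1000,
    batch_size=4,
    gradient_accumulation_steps=2,
    num_train_epochs=3,
    num_processes=2,
    lr_warmup_steps=100,
)
```

With these illustrative numbers, each of the 2 processes sees ceil(1000 / 8) = 125 batches per epoch, i.e. 63 optimizer steps per epoch and 189 total, while the scheduler receives the process-scaled counts (200 warmup, 378 training).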