[Flux LoRA] fix issues in the Flux LoRA scripts (#11111)
* remove custom scheduler
* update requirements.txt
* log_validation with mixed precision
* save intermediate embeddings when checkpointing is enabled
* remove comment
* fix validation
* add unwrap_model for the accelerator and a torch.no_grad context for validation; fix the accelerator.accumulate call in the advanced script
* temporarily revert the unwrap_model change
* add .module to address a distributed training bug + replace accelerator.unwrap_model with unwrap_model (see the sketch below)
* changes to align advanced script with canonical script
* make changes for distributed training + unify unwrap_model calls in advanced script
* add module.dtype fix to dreambooth script
* unify unwrap_model calls in dreambooth script
* fix condition in validation run
* mixed precision
* Update examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* small style change
* change autocast usage (see the mixed-precision sketch below)
* Apply style fixes
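
For reference, the unified unwrap_model helper follows the pattern used across the diffusers training scripts. This is a minimal sketch, assuming an `accelerator` (accelerate.Accelerator) in scope; the exact call sites in the scripts are not reproduced here:

```python
from accelerate import Accelerator
from diffusers.utils.torch_utils import is_compiled_module

accelerator = Accelerator()

def unwrap_model(model):
    # Strip the accelerate wrapper (e.g. DistributedDataParallel), so
    # attributes such as .dtype resolve on the underlying module instead
    # of raising an AttributeError on the DDP wrapper.
    model = accelerator.unwrap_model(model)
    # Also strip the torch.compile wrapper, if the model was compiled.
    model = model._orig_mod if is_compiled_module(model) else model
    return model
```

Under DDP, `unwrap_model(transformer).dtype` (or `transformer.module.dtype`) avoids the attribute error that motivated the `.module` fix above.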
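Likewise, a minimal sketch of running validation under mixed precision with gradients disabled; the function and argument names here are illustrative, not the scripts' exact signatures:

```python
import torch

def log_validation(pipeline, prompt, accelerator, num_images, torch_dtype):
    pipeline = pipeline.to(accelerator.device)
    # Run inference without tracking gradients, under autocast so the
    # validation pass matches the mixed-precision training dtype.
    with torch.no_grad(), torch.autocast(accelerator.device.type, dtype=torch_dtype):
        images = [pipeline(prompt).images[0] for _ in range(num_images)]
    return images
```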
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>