[Flux] Dreambooth LoRA training scripts (#9086)
* initial commit - dreambooth for flux
* update transformer to be FluxTransformer2DModel
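  For reference, a minimal sketch of loading the transformer class the script now targets; the checkpoint id below is an assumption, any Flux base model with the standard layout works:

  ```python
  import torch
  from diffusers import FluxTransformer2DModel

  # Load only the transformer component of a Flux checkpoint
  # ("black-forest-labs/FLUX.1-dev" is assumed here, not pinned by this PR).
  transformer = FluxTransformer2DModel.from_pretrained(
      "black-forest-labs/FLUX.1-dev",
      subfolder="transformer",
      torch_dtype=torch.bfloat16,
  )
  ```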
* update training loop and validation inference
* fix sd3->flux docs
* add guidance handling (not yet sure it's needed)
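  Flux "dev" is guidance-distilled, so its transformer consumes a per-sample guidance value through a dedicated embedding rather than classifier-free guidance at inference time. A rough sketch of what the handling amounts to, with `guidance_scale` and `batch_size` as stand-ins for the script's actual arguments:

  ```python
  import torch

  # Build one guidance value per sample in the batch
  # (guidance_scale and batch_size stand in for real script arguments).
  guidance_scale, batch_size = 3.5, 4
  guidance = torch.full((1,), guidance_scale, dtype=torch.float32)
  guidance = guidance.expand(batch_size)
  # Passed to the transformer as `guidance=guidance` when
  # `transformer.config.guidance_embeds` is True, otherwise left as None.
  ```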
* initial dreambooth lora commit
* fix text_ids in compute_text_embeddings
* fix imports of static methods
* fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
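  A sketch of the inference snippet the readme fix points at; the LoRA path and prompt below are placeholders, not values from this PR:

  ```python
  import torch
  from diffusers import FluxPipeline

  # Load the base pipeline, then attach the trained LoRA weights.
  # "trained-flux-lora" is a placeholder for the output directory or Hub repo.
  pipe = FluxPipeline.from_pretrained(
      "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
  ).to("cuda")
  pipe.load_lora_weights("trained-flux-lora")

  image = pipe(
      "a photo of sks dog in a bucket",  # example DreamBooth-style prompt
      num_inference_steps=28,
      guidance_scale=3.5,
  ).images[0]
  image.save("dog.png")
  ```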
* Update examples/dreambooth/train_dreambooth_flux.py
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
* fix text_encoder_2 loading and remove text_encoder_2 refs from text encoder training
* fix tokenizer_2 initialization
* remove text_encoder training refs from lora script (for now)
* try with vae in bfloat16, fix model hook save
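  Roughly what running the VAE in bfloat16 looks like; the checkpoint id is an assumption, and the random batch stands in for real pixel values in [-1, 1]:

  ```python
  import torch
  from diffusers import AutoencoderKL

  # Keep the VAE in bfloat16 to cut memory during training; latents are
  # shifted and scaled with the values from the VAE config.
  vae = AutoencoderKL.from_pretrained(
      "black-forest-labs/FLUX.1-dev", subfolder="vae", torch_dtype=torch.bfloat16
  )

  pixel_values = torch.randn(1, 3, 512, 512, dtype=torch.bfloat16)  # stand-in batch
  with torch.no_grad():
      latents = vae.encode(pixel_values).latent_dist.sample()
  latents = (latents - vae.config.shift_factor) * vae.config.scaling_factor
  ```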
* fix tokenization
* fix static imports
* fix CLIP import
* remove text_encoder training refs (for now) from lora script
* fix minor bug in encode_prompt, add guidance def in lora script, ...
* fix unpack_latents args
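  For context, the unpacking call being fixed, sketched with stand-in sizes; `_unpack_latents` is a private static helper on `FluxPipeline`, so its exact signature may shift between diffusers releases:

  ```python
  import torch
  from diffusers import FluxPipeline

  # Flux packs latents into 2x2 patches; validation decoding unpacks them
  # first. Stand-in sizes: a 512x512 image with a 16-channel latent space
  # gives (512/16)**2 = 1024 patches of 16*4 = 64 channels each.
  height, width, vae_scale_factor = 512, 512, 8
  packed = torch.randn(1, (height // 16) * (width // 16), 64)
  latents = FluxPipeline._unpack_latents(packed, height, width, vae_scale_factor)
  print(latents.shape)  # (1, 16, 64, 64)
  ```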
* fix license in readme
* add "none" to weighting_scheme options for uniform sampling
* style
* adapt model saving - remove text encoder refs
* adapt model loading - remove text encoder refs
* initial commit for readme
* Update examples/dreambooth/train_dreambooth_lora_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix vae casting
* remove precondition_outputs
* readme
* style
* update weighting scheme default & docs
* style
* add text_encoder training to lora script, change vae_scale_factor value in both scripts
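  A sketch of what enabling text encoder training plausibly involves, attaching LoRA adapters to the CLIP text encoder; rank, alpha, and target modules here are assumptions, not the script's pinned defaults:

  ```python
  from peft import LoraConfig
  from transformers import CLIPTextModel

  # Attach LoRA adapters to the CLIP text encoder's attention projections
  # (rank/alpha values below are illustrative).
  text_encoder = CLIPTextModel.from_pretrained(
      "black-forest-labs/FLUX.1-dev", subfolder="text_encoder"
  )
  text_lora_config = LoraConfig(
      r=4,
      lora_alpha=4,
      init_lora_weights="gaussian",
      target_modules=["q_proj", "k_proj", "v_proj", "out_proj"],
  )
  text_encoder.add_adapter(text_lora_config)
  ```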
* style
* text encoder training fixes
* style
* update readme
* minor fixes
* fix text encoder params
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>