[SD3 dreambooth-lora training] small updates + bug fixes (#9682)
* add latent caching + smol updates
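Latent caching here means encoding every training image through the VAE once before the loop, then reusing the cached latents each epoch. A minimal sketch of the idea, with a hypothetical `DummyVAE` standing in for the real SD3 VAE:

```python
import torch

# Hypothetical stand-in for the SD3 VAE encoder used by the script.
class DummyVAE(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 4, kernel_size=8, stride=8)

    @torch.no_grad()
    def encode(self, pixels):
        # Real VAEs return a latent distribution; a plain tensor is enough here.
        return self.conv(pixels)

def cache_latents(vae, images, batch_size=2):
    # Encode all images up front so the VAE can be freed before training.
    latents = []
    for i in range(0, len(images), batch_size):
        batch = torch.stack(images[i:i + batch_size])
        latents.append(vae.encode(batch))
    return torch.cat(latents)

vae = DummyVAE()
images = [torch.randn(3, 64, 64) for _ in range(5)]
cached = cache_latents(vae, images)
print(cached.shape)  # cached latents are reused every epoch, → (5, 4, 8, 8)
```

After caching, the VAE no longer needs to stay resident on the GPU, which is what makes the `free_memory` change below pay off.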
* update license
* replace with free_memory
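A sketch of what a `free_memory`-style helper does (the real one lives in `diffusers.training_utils`; this version is an assumption for illustration): release Python-side references, then return cached CUDA blocks to the allocator.

```python
import gc
import torch

def free_memory():
    # Collect unreachable Python objects first so their CUDA storage
    # becomes eligible, then release cached blocks back to the driver.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()

# Typical use in the script: drop the VAE after latent caching, then reclaim.
tensor = torch.ones(1024)
del tensor
free_memory()
```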
* add --upcast_before_saving to allow saving transformer weights in lower precision
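The flag controls the dtype of the serialized weights: without it, the transformer's LoRA weights are saved in the (lower) training precision; with it, they are upcast to fp32 first. A hedged sketch of the mechanism, using in-memory buffers instead of files:

```python
import io
import torch

def save_weights(model, buffer, upcast_before_saving=False):
    # Mirrors the CLI option: optionally upcast the state dict to fp32
    # before serialization; otherwise keep the training precision.
    state = model.state_dict()
    if upcast_before_saving:
        state = {k: v.to(torch.float32) for k, v in state.items()}
    torch.save(state, buffer)

model = torch.nn.Linear(4, 4).half()  # trained in fp16
low, full = io.BytesIO(), io.BytesIO()
save_weights(model, low)
save_weights(model, full, upcast_before_saving=True)
low.seek(0), full.seek(0)
print(torch.load(low)["weight"].dtype)   # torch.float16
print(torch.load(full)["weight"].dtype)  # torch.float32
```

Skipping the upcast roughly halves checkpoint size at a small cost in numeric headroom when the weights are merged later.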
* fix the models passed to `accelerator.accumulate`
* fix mixed precision issue as proposed in https://github.com/huggingface/diffusers/pull/9565
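The gist of the mixed-precision fix proposed in #9565 is to keep only the trainable (LoRA) parameters in fp32 for stable updates while the frozen base weights stay in the low-precision compute dtype. A minimal sketch, with a hand-rolled `cast_training_params` as an assumption (diffusers ships a utility of the same name):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.Linear(8, 8))
model.half()  # low-precision compute dtype for the whole model

# Pretend the second layer holds the trainable adapter parameters.
for p in model[0].parameters():
    p.requires_grad_(False)
for p in model[1].parameters():
    p.requires_grad_(True)

def cast_training_params(model, dtype=torch.float32):
    # Upcast only parameters that receive gradients; frozen weights
    # keep their fp16 storage.
    for p in model.parameters():
        if p.requires_grad:
            p.data = p.data.to(dtype)

cast_training_params(model)
print(model[0].weight.dtype)  # torch.float16
print(model[1].weight.dtype)  # torch.float32
```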
* smol update to readme
* style
* fix caching latents
* style
* add tests for latent caching
* style
* fix latent caching
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>