The documentation is not available anymore as the PR was closed or merged.
Great! Could we maybe also update the docs:
https://github.com/huggingface/diffusers/blob/main/docs/source/en/training/lora.mdx
and tests:
https://github.com/huggingface/diffusers/blob/main/examples/test_examples.py (I think we've never added a LoRA test; my bad, I only added the DreamBooth LoRA training script 😅)
@patrickvonplaten ready for a review now. I have addressed your comments.
The tests were run using the command below:
```bash
python -m pytest -n 2 --max-worker-restart=0 --dist=loadfile -s -v -k "lora" examples/test_examples.py
```
> It's also possible to additionally fine-tune the text encoder with LoRA. This, in most cases, leads to better results with a slight increase in the compute. To allow fine-tuning the text encoder with LoRA,
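For readers skimming the thread, the low-rank update that LoRA applies to a frozen layer (whether in the UNet or the text encoder) can be sketched in a few lines of plain Python. This is an illustrative toy, not the diffusers implementation; the function name `lora_forward` and the rank-1 matrices below are made up for the example:

```python
# Toy LoRA: y = W x + scale * B (A x).
# W is the frozen weight (out x in); only the low-rank factors
# A (r x in) and B (out x r), with r << min(out, in), are trained.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w_ij * v_j for w_ij, v_j in zip(row, v)) for row in m]

def lora_forward(x, W, A, B, scale=1.0):
    """Frozen base output plus a scaled low-rank correction."""
    base = matvec(W, x)                 # frozen path
    delta = matvec(B, matvec(A, x))     # trainable low-rank path
    return [b + scale * d for b, d in zip(base, delta)]

# Frozen 2x2 identity weight with rank-1 adapters.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]          # (1 x 2)
B = [[0.5], [0.5]]        # (2 x 1)
print(lora_forward([2.0, 4.0], W, A, B))  # [5.0, 7.0]
```

Setting `scale=0.0` recovers the frozen model exactly, which is why the adapters can be merged or dropped without touching the base weights.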
Would it be possible to add more experimental details on unet_lora and text_encoder_lora (and the combination) in separate blog posts (like the DreamBooth one)?
We are leaving it to the community for now. But we will update if we find more :)
Great job! Good to merge for me - we'll just need to resolve the merge conflicts
I think you broke something... My Lora + dreambooth was working fine, and now I am unable to load the LoRA weights back:
```python
pipeline.unet.load_attn_procs(lora_model_path)
```

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[4], line 1
----> 1 pipeline.unet.load_attn_procs(lora_model_path)
      2 pipeline.to("cuda")

File ~/Apps/diffusers/src/diffusers/loaders.py:279, in UNet2DConditionLoadersMixin.load_attn_procs(self, pretrained_model_name_or_path_or_dict, **kwargs)
    276 attn_processors = {k: v.to(device=self.device, dtype=self.dtype) for k, v in attn_processors.items()}
    278 # set layers
--> 279 self.set_attn_processor(attn_processors)

File ~/Apps/diffusers/src/diffusers/models/unet_2d_condition.py:513, in UNet2DConditionModel.set_attn_processor(self, processor)
    510     fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)
    512 for name, module in self.named_children():
--> 513     fn_recursive_attn_processor(name, module, processor)

File ~/Apps/diffusers/src/diffusers/models/unet_2d_condition.py:510, in UNet2DConditionModel.set_attn_processor.<locals>.fn_recursive_attn_processor(name, module, processor)
    507     module.set_processor(processor.pop(f"{name}.processor"))
    509 for sub_name, child in module.named_children():
--> 510     fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)

File ~/Apps/diffusers/src/diffusers/models/unet_2d_condition.py:510, in UNet2DConditionModel.set_attn_processor.<locals>.fn_recursive_attn_processor(name, module, processor)
    507     module.set_processor(processor.pop(f"{name}.processor"))
    509 for sub_name, child in module.named_children():
--> 510     fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)

    [... skipping similar frames: UNet2DConditionModel.set_attn_processor.<locals>.fn_recursive_attn_processor at line 510 (3 times)]

File ~/Apps/diffusers/src/diffusers/models/unet_2d_condition.py:510, in UNet2DConditionModel.set_attn_processor.<locals>.fn_recursive_attn_processor(name, module, processor)
    507     module.set_processor(processor.pop(f"{name}.processor"))
    509 for sub_name, child in module.named_children():
--> 510     fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)

File ~/Apps/diffusers/src/diffusers/models/unet_2d_condition.py:507, in UNet2DConditionModel.set_attn_processor.<locals>.fn_recursive_attn_processor(name, module, processor)
    505     module.set_processor(processor)
    506 else:
--> 507     module.set_processor(processor.pop(f"{name}.processor"))
    509 for sub_name, child in module.named_children():
    510     fn_recursive_attn_processor(f"{name}.{sub_name}", child, processor)

KeyError: 'down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor'
```
Reverting to previous commits fixes the issue.
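For context on why this surfaces as a `KeyError`: the recursive assignment in `set_attn_processor` has each attention module pop its own `f"{name}.processor"` entry from the dict, so a saved processor dict whose keys don't match the current UNet's attention layers fails on the first missing name. A toy reproduction of that pop-based pattern (the function `set_processors` and the flat module list are simplifications, not the diffusers code):

```python
def set_processors(module_names, processors):
    """Mimic the recursive assignment: each module pops its own key.

    Raises KeyError as soon as a module's key is absent from the
    saved dict, which is exactly the failure mode in the traceback.
    """
    for name in module_names:
        proc = processors.pop(f"{name}.processor")
        print(f"assigned {name} -> {proc}")

# Module name taken from the traceback above.
modules = ["down_blocks.0.attentions.0.transformer_blocks.0.attn1"]
try:
    set_processors(modules, {})  # empty/mismatched dict: first lookup fails
except KeyError as e:
    print("missing:", e)
```

So a `KeyError` here usually means the checkpoint's attention-processor keys and the model's attention layers diverged between commits, not that the file itself is corrupt.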
It's better if you create a new issue thread for the problems you are facing, with a reproducible snippet.
Yeah, sorry. I just pulled and integrated that commit and it broke, and then realised that you had just merged it.
Following up on #2918. Closes #3065
Hyperparameter tuning might change the game but with the default hyperparameters (taken from the LoRA training section of DreamBooth), it seems to be doing okay.
Report: https://wandb.ai/sayakpaul/dreambooth-lora/reports/test-23-04-17-17-00-13---Vmlldzo0MDkwNjMy
Model repo (it comes with a model card, in case anyone forgot): https://huggingface.co/sayakpaul/dreambooth
Sharing this cutie which was generated:
