Can you provide a command to reproduce the issue?
@sayakpaul Just follow the LoRA training example:
```bash
git clone https://github.com/huggingface/diffusers
cd diffusers
pip install .
cd examples/text_to_image
pip install -r requirements.txt
accelerate config default
```
```bash
export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export OUTPUT_DIR="/sddata/finetune/lora/pokemon"
export HUB_MODEL_ID="pokemon-lora"
export DATASET_NAME="lambdalabs/pokemon-blip-captions"

accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --dataset_name=$DATASET_NAME \
  --dataloader_num_workers=8 \
  --resolution=512 \
  --center_crop \
  --random_flip \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=15000 \
  --learning_rate=1e-04 \
  --max_grad_norm=1 \
  --lr_scheduler="cosine" \
  --lr_warmup_steps=0 \
  --output_dir=${OUTPUT_DIR} \
  --push_to_hub \
  --hub_model_id=${HUB_MODEL_ID} \
  --report_to=wandb \
  --checkpointing_steps=500 \
  --validation_prompt="A pokemon with blue eyes." \
  --seed=1337
```
This fails with `ValueError: Attempting to unscale FP16 gradients.`, for which a fix can be found in #6080.
After applying the fix from #6080, a second error occurs: `RuntimeError: Input type (float) and bias type (c10::Half) should be the same`.
This error is mentioned in #6086 (comment) and is the same as #4796. After applying that change, everything works fine.
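To illustrate why the second error appears: with `--mixed_precision="fp16"` the model weights end up in half precision, while the inference path feeds them float32 tensors. A minimal standalone reproduction of that mismatch (not taken from the training script) looks like this:

```python
import torch

# fp16 layer, as produced by mixed-precision training
conv = torch.nn.Conv2d(3, 8, kernel_size=3).cuda().half()
# fp32 input, as produced by the inference path
x = torch.randn(1, 3, 64, 64, device="cuda")
conv(x)  # raises the dtype-mismatch RuntimeError (float input vs. half weights/bias)
```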
I think this can be solved by running the inference under an autocast block. Could you instead try that?
Here is an example:

```python
# run the inference inside an autocast context
with torch.cuda.amp.autocast():
    ...
```

@sayakpaul The problem can be solved by using an autocast block. Thanks.
Yup. Feel free to update the PR accordingly then :)
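For reference, here is a minimal sketch of running the trained LoRA under autocast at inference time. The checkpoint, output directory, and prompt are taken from the command above; the exact validation code inside the training script may differ.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the fp16 base model and the trained LoRA weights (paths from the command above).
pipeline = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("/sddata/finetune/lora/pokemon")

# Running the denoising loop under autocast lets fp32 tensors and the fp16 weights mix
# safely, avoiding "Input type (float) and bias type (c10::Half) should be the same".
with torch.cuda.amp.autocast():
    image = pipeline("A pokemon with blue eyes.", num_inference_steps=30).images[0]
image.save("pokemon.png")
```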
When running train_text_to_image_lora.py, `RuntimeError: Input type (float) and bias type (c10::Half) should be the same` occurs.
This is the same issue as #4796, so the same update is applied to pipeline_stable_diffusion.py here.
Fixes # (issue)
`RuntimeError: Input type (float) and bias type (c10::Half) should be the same`
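As context for the kind of change involved, here is a minimal sketch, assuming the fix amounts to aligning the dtype of the tensors entering the fp16 UNet. The names `unet`, `latents`, `t`, and `prompt_embeds` only mirror the pipeline internals; this is illustrative, not the exact diff applied to pipeline_stable_diffusion.py.

```python
def denoise_step(unet, latents, t, prompt_embeds):
    # Cast the tensors entering the fp16 UNet (a diffusers UNet2DConditionModel) to its
    # parameter dtype so fp32 inputs never meet fp16 weights/biases, which is what
    # triggers the RuntimeError above.
    latents = latents.to(dtype=unet.dtype)
    prompt_embeds = prompt_embeds.to(dtype=unet.dtype)
    return unet(latents, t, encoder_hidden_states=prompt_embeds).sample
```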