The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.
Thanks for your PR. Could you also provide an example of how to use the script?
I didn't change the script usage.
For SDXL:
python convert_diffusers_to_original_stable_diffusion.py --model_path=/path/to/stable-diffusion-xl-base-1.0/ --checkpoint_path=./sdxl.ckpt
For SD v2.1:
python convert_diffusers_to_original_stable_diffusion.py --model_path=/path/to/stable-diffusion-2-1/ --checkpoint_path=./sd21.ckpt
I've tested this script on SDXL and SD 2.1, and the converted weights load properly in the SD WebUI.
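For anyone who wants to sanity-check a converted checkpoint outside the WebUI, here is a minimal sketch that loads it back with diffusers' single-file loader; the checkpoint path and prompt are placeholders, and this assumes a diffusers version whose StableDiffusionXLPipeline supports from_single_file:
import torch
from diffusers import StableDiffusionXLPipeline

# Load the single-file checkpoint produced by the conversion script (placeholder path).
pipe = StableDiffusionXLPipeline.from_single_file("./sdxl.ckpt", torch_dtype=torch.float16)
pipe.to("cuda")

# Generate a test image to confirm the converted weights behave sensibly.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=30).images[0]
image.save("sanity_check.png")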
Thanks! @DN6 could you give this a look too?
Conversion done and the checkpoint loads without problems, but it produces a NaN error at inference:
'A tensor with all NaNs was produced in Unet'.
Could you tell me how you run inference?
In AUTOMATIC1111/stable-diffusion-webui, as a regular base SDXL model.
This may be a bug in webui: AUTOMATIC1111/stable-diffusion-webui#12561
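Before pinning it on the WebUI, it may be worth checking whether the converted file itself contains NaNs. A small diagnostic sketch, assuming the output was saved as a .ckpt with the usual 'state_dict' wrapper (adjust the path accordingly):
import torch

# Scan every tensor in the converted checkpoint for NaN/Inf values.
checkpoint = torch.load("./sdxl.ckpt", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
bad_keys = [k for k, v in state_dict.items() if torch.is_tensor(v) and not torch.isfinite(v).all()]
print(f"{len(bad_keys)} tensors contain NaN/Inf")
print(bad_keys[:10])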
Hi, I'm the author of this bug report. Have you run into this problem too, and have you solved it? Thanks.
@realliujiaxu Hi, thanks for your PR, that's awesome! Besides converting diffusers to SDXL, could you please also help with converting SDXL to diffusers?
I believe that is already implemented in the main branch, check https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py.
Sorry, I haven't had that problem. But I did encounter a situation where the generated image was all black or colored noise; switching between the v1.5 model and SDXL might fix it.
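All-black or noisy SDXL outputs in half precision are often caused by the VAE overflowing in fp16. A common workaround, sketched here under the assumption that you are loading a local diffusers folder and are happy to use the madebyollin/sdxl-vae-fp16-fix weights, is to swap in an fp16-safe VAE:
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# fp16-safe VAE that avoids the overflow behind black/noisy images.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# The model path is a placeholder for your own diffusers SDXL folder.
pipe = StableDiffusionXLPipeline.from_pretrained("/path/to/your-diffusers-sdxl-model", vae=vae, torch_dtype=torch.float16)
pipe.to("cuda")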
I see, really, thank you very much!
@realliujiaxu SDXL is supported natively in Diffusers.
https://huggingface.co/docs/diffusers/v0.20.0/en/api/pipelines/stable_diffusion/stable_diffusion_xl#stable-diffusion-xl
Unless you're trying to convert a different type of SDXL checkpoint to Diffusers?
This script converts a Diffusers pipeline to an original SDXL checkpoint, as proposed in #4494.
I think there is a real need to convert models fine-tuned with Diffusers back to the original SDXL checkpoint format, which can then be loaded into a WebUI, for example.
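To make the use case concrete, here is a minimal sketch of where the script's input comes from; the repo id and output folder are placeholders, and the point is only that --model_path expects a pipeline saved in the diffusers folder layout:
from diffusers import StableDiffusionXLPipeline

# After fine-tuning (or simply downloading) an SDXL pipeline, save it in the
# diffusers folder layout; this folder is what --model_path points to.
pipe = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0")
pipe.save_pretrained("./my-finetuned-sdxl")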
@realliujiaxu Thanks for putting this together. I think the changes look good; however, I think it would be easier to maintain as a separate script, e.g. convert_diffusers_to_original_sdxl.py, rather than as a flag passed to the existing stable diffusion script.
Thanks for your advice. I've updated it as a separate script.
@DN6 Any other suggestions?
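For reference, the invocation of the renamed script should look the same as before, just with the new file name; the model path below is a placeholder, and this assumes the flags are unchanged from the existing conversion script:
python convert_diffusers_to_original_sdxl.py --model_path=/path/to/your-diffusers-sdxl-model/ --checkpoint_path=./sdxl_converted.ckpt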
Thank you, it works well. Here is SDXL 1.0 in fp32 (with the fixed VAE), if anyone needs it: https://huggingface.co/alessandro893/sdxl_base_1.0_FP32
LGTM 👍🏽
Hi. Thanks alessandro893, your file works for me. But other SDXL models give the error NansException: A tensor with all NaNs was produced in Unet. Use --disable-nan-check commandline argument to disable this check. (Video card: 3090.)
The sd_xl_base_1.0refiner.safetensors file still gives the NaN error, as do other SDXL models from civitai. The --no-half startup argument doesn't help, and --disable-nan-check gives a black square. I thought that running convert_diffusers_to_original_sdxl.py would solve the problem, but maybe I'm wrong? It doesn't work out for me.
File "f:\SD\venv\Lib\site-packages\torch\convert_diffusers_to_original_sdxl.py", line 5, in
import argparse
File "f:\SD\venv\Lib\site-packages\torch\argparse.py", line 89, in
import re as _re
File "f:\sd\python\lib\re.py", line 124, in
import enum
File "f:\sd\python\lib\enum.py", line 2, in
from types import MappingProxyType, DynamicClassAttribute
File "f:\SD\venv\Lib\site-packages\torch\types.py", line 1, in
import torch
ModuleNotFoundError: No module named 'torch'
Could you tell me how to run the script? My knowledge isn't enough here.
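From the traceback it looks like the script was copied into the venv's site-packages\torch folder, which shadows standard modules and breaks the torch import. A sketch of a way to run it instead, assuming f:\SD\venv is the environment that has torch and diffusers installed and assuming the script keeps the --model_path/--checkpoint_path flags: put convert_diffusers_to_original_sdxl.py in a normal working directory (e.g. f:\SD\) and run it with the venv's interpreter, for example
f:\SD\venv\Scripts\python.exe f:\SD\convert_diffusers_to_original_sdxl.py --model_path=/path/to/your-diffusers-sdxl-model/ --checkpoint_path=./sdxl.ckpt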
I converted multiple SDXL diffusers models to safetensors, but they do not work when I load them. Can you please check?
ValueError Traceback (most recent call last)
/home/example_notebook.ipynb Cell 3 line 1
----> 1 pipe = StableDiffusionXLPipeline.from_single_file(model_path,local_files_only = True)
File ~/diff_test_env/lib/python3.8/site-packages/diffusers/loaders.py:2308, in FromSingleFileMixin.from_single_file(cls, pretrained_model_link_or_path, **kwargs)
2294 file_path = file_path[len("main/") :]
2296 pretrained_model_link_or_path = hf_hub_download(
2297 repo_id,
2298 filename=file_path,
(...)
2305 force_download=force_download,
2306 )
-> 2308 pipe = download_from_original_stable_diffusion_ckpt(
2309 pretrained_model_link_or_path,
2310 pipeline_class=cls,
2311 model_type=model_type,
2312 stable_unclip=stable_unclip,
2313 controlnet=controlnet,
2314 from_safetensors=from_safetensors,
2315 extract_ema=extract_ema,
2316 image_size=image_size,
2317 scheduler_type=scheduler_type,
2318 num_in_channels=num_in_channels,
2319 upcast_attention=upcast_attention,
2320 load_safety_checker=load_safety_checker,
2321 prediction_type=prediction_type,
2322 text_encoder=text_encoder,
2323 vae=vae,
2324 tokenizer=tokenizer,
2325 original_config_file=original_config_file,
2326 config_files=config_files,
2327 )
2329 if torch_dtype is not None:
2330 pipe.to(torch_dtype=torch_dtype)
File ~/diff_test_env/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py:1605, in download_from_original_stable_diffusion_ckpt(checkpoint_path_or_dict, original_config_file, image_size, prediction_type, model_type, extract_ema, scheduler_type, num_in_channels, upcast_attention, device, from_safetensors, stable_unclip, stable_unclip_prior, clip_stats_path, controlnet, load_safety_checker, pipeline_class, local_files_only, vae_path, vae, text_encoder, tokenizer, config_files)
1603 config_name = "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"
1604 config_kwargs = {"projection_dim": 1280}
-> 1605 text_encoder_2 = convert_open_clip_checkpoint(
1606 checkpoint, config_name, prefix="conditioner.embedders.1.model.", has_projection=True, **config_kwargs
1607 )
1609 if is_accelerate_available(): # SBM Now move model to cpu.
1610 if model_type in ["SDXL", "SDXL-Refiner"]:
File ~/diff_test_env/lib/python3.8/site-packages/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py:971, in convert_open_clip_checkpoint(checkpoint, config_name, prefix, has_projection, local_files_only, **config_kwargs)
969 if is_accelerate_available():
970 for param_name, param in text_model_dict.items():
--> 971 set_module_tensor_to_device(text_model, param_name, "cpu", value=param)
972 else:
973 if not (hasattr(text_model, "embeddings") and hasattr(text_model.embeddings.position_ids)):
File ~/diff_test_env/lib/python3.8/site-packages/accelerate/utils/modeling.py:285, in set_module_tensor_to_device(module, tensor_name, device, value, dtype, fp16_statistics)
283 if value is not None:
284 if old_value.shape != value.shape:
--> 285 raise ValueError(
286 f'Trying to set a tensor of shape {value.shape} in "{tensor_name}" (which has shape {old_value.shape}), this look incorrect.'
287 )
289 if dtype is None:
290 # For compatibility with PyTorch load_state_dict which converts state dict dtype to existing dtype in model
291 value = value.to(old_value.dtype)
ValueError: Trying to set a tensor of shape torch.Size([1024]) in "bias" (which has shape torch.Size([1280])), this look incorrect.
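The mismatch (1024 vs. 1280) suggests that the tensors written for the second text encoder do not have the OpenCLIP-bigG shapes the SDXL loader expects under the conditioner.embedders.1.model. prefix. A quick way to inspect what was actually written, sketched under the assumption that the converted file is a .safetensors checkpoint (the path is a placeholder):
from safetensors import safe_open

# Print the shapes stored under the SDXL second-text-encoder prefix.
with safe_open("./converted_sdxl.safetensors", framework="pt", device="cpu") as f:
    for key in f.keys():
        if key.startswith("conditioner.embedders.1.model."):
            print(key, tuple(f.get_tensor(key).shape))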
Add a script to convert a Diffusers XL pipeline to the original Stable Diffusion checkpoint format.
Fixes #4494