onnxruntime
af80542e - Update optimize_pipeline for SDXL (#17536)

- [x] Optimize SDXL models exported by optimum.
- [x] Enable it to run locally instead of using the module.
- [x] Detect an external data file in the original model, and save in the same format by default.
- [x] Add tests.

### Example

```
pip install optimum transformers diffusers onnx "onnxruntime-gpu>=1.16"
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl ./sd_xl_base_onnx
python -m onnxruntime.transformers.models.stable_diffusion.optimize_pipeline -i ./sd_xl_base_onnx -o ./sd_xl_base_fp16 --float16
```

### Known issues

(1) The VAE decoder cannot be converted to float16; otherwise the output image is black.

(2) To use the float16 models, a minor change is needed in optimum to convert the inputs for the VAE decoder from float16 to float32, since we keep the VAE decoder in float32. The change is to append a line like the following after [this line](https://github.com/huggingface/optimum/blob/afd2b5a36663bebd1f501486acee065c728947bc/optimum/pipelines/diffusers/pipeline_stable_diffusion_xl.py#L483):

```
latents = latents.astype(np.float32)
```
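To illustrate the dtype mismatch behind known issue (2): the float16 UNet emits float16 latents, but a float32 VAE decoder expects float32 inputs, so the latents must be cast before decoding. A minimal sketch with a placeholder array standing in for real latents:

```python
import numpy as np

# Placeholder latents in the shape/dtype a float16 SDXL UNet would produce
# (1 image, 4 latent channels, 128x128 for a 1024x1024 output).
latents = np.zeros((1, 4, 128, 128), dtype=np.float16)

# The VAE decoder is kept in float32, so cast the latents up before
# feeding them to it -- this is the one-line change described above.
latents = latents.astype(np.float32)

print(latents.dtype)
```

Without the cast, ONNX Runtime rejects the float16 input tensor because the decoder graph declares its `latent_sample` input as float32.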