SkipGroupNorm fusion and SDXL Pipeline Update (#18273)
Update a few optimizations for Stable Diffusion XL:
(1) Add SkipGroupNorm fusion
(2) Remove GroupNorm fusion limits. Previously, GroupNorm was fused only
when the channel count was one of `320, 640, 960, 1280, 1920, 2560, 128, 256, 512`,
so some GroupNorm nodes in the refiner were not fused.
(3) Tune SkipLayerNormalization to use a vectorized kernel for hidden
sizes 320, 640 and 1280.
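For reference, SkipGroupNorm fuses a residual Add (and an optional bias) with the GroupNorm that follows it into a single op. Below is a minimal NumPy sketch of the expected semantics; the function names, NCHW layout, and shapes are illustrative only and do not reflect the actual contrib-op kernel or its signature:

```python
import numpy as np

def group_norm(x, gamma, beta, num_groups, eps=1e-5):
    # x: (N, C, H, W); normalize over each channel group plus spatial dims.
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    x = g.reshape(n, c, h, w)
    return x * gamma.reshape(1, c, 1, 1) + beta.reshape(1, c, 1, 1)

def skip_group_norm(x, skip, gamma, beta, num_groups, bias=None, eps=1e-5):
    # Fused op: add the residual (and optional per-channel bias), then GroupNorm.
    s = x + skip
    if bias is not None:
        s = s + bias.reshape(1, -1, 1, 1)
    return group_norm(s, gamma, beta, num_groups, eps)

# The fused path must match Add followed by GroupNorm:
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 4, 4)).astype(np.float32)
skip = rng.standard_normal((2, 8, 4, 4)).astype(np.float32)
gamma = rng.standard_normal(8).astype(np.float32)
beta = rng.standard_normal(8).astype(np.float32)
fused = skip_group_norm(x, skip, gamma, beta, num_groups=4)
unfused = group_norm(x + skip, gamma, beta, num_groups=4)
assert np.allclose(fused, unfused)
```

The fusion saves a separate Add kernel launch and the extra read/write of the intermediate tensor.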
Pipeline Improvements:
(4) Enable CUDA graph for unetxl.
(5) Change the optimization flow to generate an optimized fp32 model with
ORT first, then convert it to fp16. Otherwise, the fp16 model might be invalid.
(6) Add an `enable-vae-slicing` option.
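Point (5) matters because graph optimizations such as constant folding can produce intermediate values that overflow fp16's range (max ~65504) when the model is converted to fp16 before optimization. A tiny, contrived illustration of the hazard:

```python
import numpy as np

x = np.float32(4.0)

# Folding (x * 3e4) / 1e4 in fp32, then casting the final result to fp16:
fp32_then_cast = np.float16((x * np.float32(3e4)) / np.float32(1e4))  # 12.0, finite

# The same folding done directly in fp16 overflows the intermediate product:
fp16_direct = (np.float16(x) * np.float16(3e4)) / np.float16(1e4)  # inf

assert np.isfinite(fp32_then_cast)
assert np.isinf(fp16_direct)
```

Optimizing in fp32 first means folded intermediates are computed in full precision, and only in-range final constants are cast down.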
Bug fixes:
(a) Fix VAE decode in the SD demo.
(b) Fix UniPC `add_noise` missing a parameter.
(c) EulerA raises an exception in the SDXL demo; disable it for now.
(d) VAE decode fails for batch size > 4 without slicing. Force VAE
slicing when batch size > 4.
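VAE slicing (points 6 and d) decodes the latent batch in smaller slices to cap peak memory, at the cost of serializing the work. A minimal sketch of the idea; `decode_sliced` and the placeholder decoder are illustrative names, not the demo's actual API:

```python
import numpy as np

def decode_sliced(decode, latents, max_batch=4):
    # Run any per-batch decoder on slices of at most max_batch samples,
    # then stitch the outputs back together along the batch axis.
    slices = [decode(latents[i:i + max_batch])
              for i in range(0, len(latents), max_batch)]
    return np.concatenate(slices, axis=0)

# Placeholder standing in for the real VAE decoder (elementwise, so
# slicing is exactly equivalent to full-batch decoding):
fake_decode = lambda z: z * 2.0 + 1.0

latents = np.random.default_rng(0).standard_normal((6, 4, 8, 8)).astype(np.float32)
sliced = decode_sliced(fake_decode, latents)
assert np.allclose(sliced, fake_decode(latents))
```

Because the VAE decoder processes each sample independently, slicing changes peak memory but not the output, which is why forcing it for batch size > 4 is safe.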
#### Performance Test on A100-SXM4-80GB
Configurations compared in the results:
*Baseline*: GroupNorm fusion limits removed; CUDA graph enabled in
Clip and VAE, but not in Clip2 and UNet.
*UNetCG*: enable CUDA graph on UNet.
*SLN*: tune SkipLayerNormalization.
*SGN*: add SkipGroupNorm fusion.
The latency (ms) of generating a 1024x1024 image with 30 steps of the
base model and 9 steps of the refiner model:
| | Baseline | UNetCG | UNetCG+SLN | UNetCG+SLN+SGN |
| -- | -- | -- | -- | -- |
| Base Clip | 3.74 | 3.70 | 3.88 | 3.81 |
| Base UNet x30 | 2567.73 | 2510.69 | 2505.09 | 2499.99 |
| Refiner Clip | 7.59 | 7.42 | 7.41 | 7.58 |
| Refiner UNet x9 | 814.43 | 803.03 | 802.20 | 799.06 |
| Refiner VAE Decoder | 84.62 | 85.18 | 85.24 | 87.43 |
| E2E | 3480.56 | 3412.05 | 3405.77 | 3400.23 |
Enabling CUDA graph brings the major gain (around 68 ms end to end); SLN
tuning adds about 7 ms and SkipGroupNorm fusion about 5 ms. Although
SkipGroupNorm fusion does not reduce latency much, it also reduces
memory usage, so enabling it is recommended.
### Motivation and Context
Additional optimizations upon previous work in
https://github.com/microsoft/onnxruntime/pull/17536.