onnxruntime
03c6c2e2 - [QNN] MatMulAddFusion and Reshape Related Fusion (#22494)

[QNN] MatMulAddFusion and Reshape Related Fusion (#22494)

QNN EP relies on the Gemm Op to use the FullyConnected QNN Op to run the model, which is much faster than MatMul+Add. This PR fuses MatMul+Add when MatMul's 2nd input is a 2D initializer, regardless of the rank of the 1st input. If the 1st input is not a 2D tensor, Reshape nodes are added around the fused Gemm.

On QNN EP, memory is allocated per activation tensor, so Reshape/Squeeze/Unsqueeze are not no-ops. This PR therefore also adds fusions that remove redundant Reshape nodes. For some QNN AI Hub models on specific devices, the graph cannot be finalized at execution time unless the Reshape nodes are removed, but runs well once they are.

Average inference time with and without the change:

| Model | With change | Without change |
| --- | --- | --- |
| swin_tiny | 12.8077 ms | 23.956 ms |
| swin_base | 27.0639 ms | 57.6608 ms |
| convnext_tiny | 3.42956 ms | 16.1848 ms |
| openai_clip_CLIPTextEncoder | 5.96104 ms | 220.406 ms |
| openai_clip_CLIPImageEncoder | 41.8206 ms | 919.712 ms |

NOTE that the current change skips the Attention pattern; otherwise it would prevent AttentionFusion from matching. Ideally we would adjust AttentionFusion to support the Gemm pattern, but that requires big changes. Maybe we can do this in the future, say, when we want to run transformer models on QNN: since there is no Attention QNN Op, we would still want to fuse MatMul+Add inside the Attention pattern so it uses FullyConnected on the QNN side.

---------

Co-authored-by: adrianlizarraga <adlizarraga@microsoft.com>
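To make the rewrite concrete, here is a minimal ONNX-level sketch in Python of the pattern the fusion produces for a non-2D input: the leading dimensions are flattened with a Reshape, MatMul+Add collapses into a Gemm (which QNN EP lowers to FullyConnected), and a second Reshape restores the original shape. This is an illustration only, not the actual C++ graph transformer in onnxruntime; all names and shapes (`X`, `W`, `B`, the rank-3 dims) are hypothetical.

```python
import numpy as np
import onnx
from onnx import TensorProto, helper, numpy_helper

batch, seq, k, n = 2, 8, 64, 32  # hypothetical dims; X is rank-3

# W is a 2D initializer, the condition under which the PR fuses MatMul+Add.
W = numpy_helper.from_array(np.random.rand(k, n).astype(np.float32), "W")
B = numpy_helper.from_array(np.random.rand(n).astype(np.float32), "B")
shape_2d = numpy_helper.from_array(np.array([-1, k], dtype=np.int64), "shape_2d")
shape_3d = numpy_helper.from_array(np.array([batch, seq, n], dtype=np.int64), "shape_3d")

graph = helper.make_graph(
    [
        # Flatten leading dims so the rank-2-only Gemm applies.
        helper.make_node("Reshape", ["X", "shape_2d"], ["X2d"]),
        # Gemm replaces MatMul+Add; QNN EP maps it to FullyConnected.
        helper.make_node("Gemm", ["X2d", "W", "B"], ["Y2d"]),
        # Restore the original leading dims.
        helper.make_node("Reshape", ["Y2d", "shape_3d"], ["Y"]),
    ],
    "matmul_add_as_gemm",
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [batch, seq, k])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [batch, seq, n])],
    initializer=[W, B, shape_2d, shape_3d],
)
onnx.checker.check_model(helper.make_model(graph))
```

Because QNN EP allocates memory for each activation tensor, each Reshape in this pattern has a real cost there, which is why the PR pairs the fusion with passes that drop Reshape nodes made redundant by adjacent reshapes.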