[QNN EP] MatMul+Add->Gemm fusion when AttentionFusion isn't enabled (#25017)
### Description
MatMul+Add->Gemm fusion when AttentionFusion isn't enabled.
### Motivation and Context
The graph transformation
[MatMulAddFusion](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/optimizer/matmul_add_fusion.cc)
folds `ONNX::MatMul` followed by `ONNX::Add` into `ONNX::Gemm`; however, it [intentionally skips the portion that belongs to the "Attention Pattern"](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/core/optimizer/matmul_add_fusion.cc#L21).
This results in poor performance on the QNN EP (and other EPs that do not run the *AttentionFusion transformers), because those MatMul + Add pairs are left unfused.

With this change, the remaining MatMul + Add pairs are additionally fused into Gemm *after* the AttentionFusion transformers have run.
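
For reference, a minimal sketch of the pattern involved, written with the `onnx` Python helper API and using hypothetical shapes and tensor names (not code from this PR): the unfused `MatMul` + `Add` subgraph and the single `Gemm` it can be folded into (default `alpha`/`beta`, no transposes) compute the same result.

```python
import numpy as np
from onnx import TensorProto, helper

# Hypothetical shapes and tensor names, for illustration only.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [4, 8])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4, 16])
W = helper.make_tensor("W", TensorProto.FLOAT, [8, 16],
                       np.random.rand(8, 16).astype(np.float32).flatten().tolist())
B = helper.make_tensor("B", TensorProto.FLOAT, [16],
                       np.random.rand(16).astype(np.float32).tolist())

# Unfused pattern: MatMul followed by Add (what MatMulAddFusion looks for).
matmul = helper.make_node("MatMul", ["X", "W"], ["mm_out"])
add = helper.make_node("Add", ["mm_out", "B"], ["Y"])
unfused = helper.make_graph([matmul, add], "unfused", [X], [Y], [W, B])

# Fused equivalent: a single Gemm computing Y = X * W + B.
gemm = helper.make_node("Gemm", ["X", "W", "B"], ["Y"])
fused = helper.make_graph([gemm], "fused", [X], [Y], [W, B])
```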