Integration with ONNX rel-1.18.0 (#24449)
### Description
This PR adds CPU support for ONNX 1.18.0, following the release logistics in
https://github.com/onnx/onnx/wiki/Logistics-for-ONNX-Release-1.18.0. The
goal is to make the minimal set of changes needed to ensure ONNX Runtime
works correctly with ONNX 1.18.0.
### Motivation and Context
The upcoming ONNX 1.18.0 release provides the following:
(1) Introduces opset 23 (included in this PR)
(2) Adds the Attention, RMSNormalization, and RotaryEmbedding operators
(**NOT** included in this PR)
(3) Adds the float4e2m1 data type (**NOT** included in this PR)
### Remaining Issues
1. onnx.patch
* ONNX Runtime uses static shape-inference functions from ONNX
(https://github.com/microsoft/onnxruntime/issues/24558); see the
shape-inference sketch after this list.
* GroupNormalization-18 is deprecated because its spec was wrong
(https://github.com/microsoft/onnxruntime/issues/24560).
* The OpSchemaRegisterOnce constructor in ONNX's schema-registration API is
now marked explicit, while ONNX Runtime relied on the implicit conversion
for fluent-chaining-style contrib-op registration
(https://github.com/microsoft/onnxruntime/issues/24561); see the
registration sketch after this list.
2. Support float4e2m1
(https://github.com/microsoft/onnxruntime/issues/24553); see the
float4e2m1 decoding sketch after this list.
3. Support Attention
(https://github.com/microsoft/onnxruntime/issues/24554), RMSNormalization
(https://github.com/microsoft/onnxruntime/issues/24555), and RotaryEmbedding
(https://github.com/microsoft/onnxruntime/issues/24556).
4. Disable QNN tests
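
For reference, a minimal sketch of the shape-inference usage pattern behind issue 24558, assuming helpers declared in `onnx/defs/shape_inference.h` such as `propagateElemTypeFromInputToOutput`; the specific functions affected by the static-linkage problem are tracked in the linked issue, and this inference function is illustrative only.

```cpp
#include "onnx/defs/shape_inference.h"

// Illustrative type/shape inference routine of the kind ONNX Runtime attaches
// to its contrib-op schemas; it calls inference helpers provided by ONNX.
static void IdentityLikeTypeAndShapeInference(ONNX_NAMESPACE::InferenceContext& ctx) {
  // Output 0 inherits input 0's element type.
  ONNX_NAMESPACE::propagateElemTypeFromInputToOutput(ctx, 0, 0);
  // And its shape, whenever the input shape is known.
  if (ONNX_NAMESPACE::hasInputShape(ctx, 0)) {
    ONNX_NAMESPACE::propagateShapeFromInputToOutput(ctx, 0, 0);
  }
}
```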
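
For the registration issue (24561), a minimal sketch of the fluent-chaining style in question, simplified from what the schema-registration macros expand to; the op name and schema contents are hypothetical, not an op touched by this PR.

```cpp
#include "onnx/defs/schema.h"

// Copy-initializing the registration guard from a chained OpSchema expression
// relies on an implicit OpSchemaRegisterOnce constructor.
static ONNX_NAMESPACE::OpSchemaRegistry::OpSchemaRegisterOnce example_registration =
    ONNX_NAMESPACE::OpSchema("ExampleContribOp", __FILE__, __LINE__)  // hypothetical op
        .SetDomain("com.microsoft")
        .SetDoc("Illustrative schema only.")
        .Input(0, "X", "Input tensor", "T")
        .Output(0, "Y", "Output tensor", "T")
        .TypeConstraint("T", {"tensor(float)"}, "Constrain to float tensors.");
// Once the OpSchemaRegisterOnce constructor is marked explicit, the
// `= OpSchema(...)` copy-initialization above no longer compiles; direct
// initialization would be required instead, which is why the patch (or a
// later code change) is needed.
```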
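
For context on item 2, float4e2m1 is a 4-bit floating-point format: 1 sign bit, 2 exponent bits (bias 1), and 1 mantissa bit, with no inf/nan encodings, giving the magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, and 6. Below is a minimal decoding sketch, independent of any ONNX Runtime API; the low-nibble-first packing shown is an assumption based on the existing 4-bit types.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Decode one float4e2m1 value from the low 4 bits of `nibble`.
float Float4E2M1ToFloat(uint8_t nibble) {
  const int sign = (nibble >> 3) & 0x1;
  const int exponent = (nibble >> 1) & 0x3;
  const int mantissa = nibble & 0x1;
  const float magnitude =
      (exponent == 0)
          ? mantissa * 0.5f                                    // subnormal: 0 or 0.5
          : std::ldexp(1.0f + mantissa * 0.5f, exponent - 1);  // 1 .. 6
  return sign ? -magnitude : magnitude;
}

int main() {
  // Two elements per byte, low nibble first (assumed packing).
  const uint8_t packed = 0x7A;  // low nibble 0xA -> -1.0, high nibble 0x7 -> 6.0
  std::printf("%g %g\n", Float4E2M1ToFloat(packed & 0x0F),
              Float4E2M1ToFloat(packed >> 4));
  return 0;
}
```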