onnxruntime
a70ac2f6 - Sync optimizer opset versions with CPU kernel registrations (#27270)

Sync optimizer opset versions with CPU kernel registrations (#27270)

This pull request expands several ONNX operator fusion and optimization passes to cover newer operator set (opset) versions, especially opset 22 and above. This ensures the optimizer can handle models built against the latest ONNX specifications, improving compatibility and optimization coverage.

The most important changes are:

**Expanded opset version support for key operators:**

* Updated checks in various fusion and optimization passes (e.g., Conv, Dropout, HardSigmoid, Transpose, Cast, IsInf, Reshape) to include newer opset versions such as 19, 20, 21, 22, 23, 24, and 25, where applicable. This affects fusion passes including Conv-Add, Conv-BN, Conv-Mul, Conv-Activation, BiasDropout, FastGelu, LayerNorm, Dropout elimination, GemmTranspose, IsInfReduceSum, and more.

**Fusion rule and selector registration updates:**

* Modified fusion rule registration to account for the new opset versions, ensuring that pattern matchers and selectors for fusions like Conv+Activation and Conv+Add+Activation are registered for the expanded opset ranges.

**Operator-specific compatibility logic:**

* Extended the supported-activation logic and other operator checks to include additional opset versions for Elu, HardSigmoid, LeakyRelu, Selu, Softplus, Softsign, ThresholdedRelu, and others, widening the range of fusable patterns.

These changes collectively improve the optimizer's ability to process and optimize models that use the latest ONNX operator versions, making the system more robust and future-proof.

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: titaiwangms <18010845+titaiwangms@users.noreply.github.com>
Co-authored-by: Ti-Tai Wang <titaiwang@microsoft.com>