Add fusion support for Dnnl execution provider (#9897)
* Op fusion support added
The following op fusions are now detected:
- ConvRelu
- MatMulAdd
This change includes:
- change the abstraction of subgraph + node + tensor to support delete,
insert, and modify operations
- add a NodeArg class to establish the connection from tensor to node
- add a GraphTransformer class to support fusion
- add a topological sort to ensure proper node ordering after fusion
- add ConvRelu and MatMulAdd primitives to support execution of fused nodes
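The re-sort after fusion can be sketched with Kahn's algorithm. This is a minimal, self-contained version using plain node indices and an edge list as hypothetical stand-ins for the DnnlSubgraph/DnnlNode connectivity, not the EP's actual implementation:

```cpp
#include <cstddef>
#include <queue>
#include <utility>
#include <vector>

// Kahn's algorithm: repeatedly emit nodes whose inputs are all satisfied.
// After a fusion rewrites the node list, this restores a valid execution order.
std::vector<size_t> TopologicalSort(
    size_t num_nodes,
    const std::vector<std::pair<size_t, size_t>>& edges) {
  std::vector<std::vector<size_t>> adj(num_nodes);
  std::vector<size_t> in_degree(num_nodes, 0);
  for (const auto& e : edges) {
    adj[e.first].push_back(e.second);
    ++in_degree[e.second];
  }
  std::queue<size_t> ready;
  for (size_t n = 0; n < num_nodes; ++n)
    if (in_degree[n] == 0) ready.push(n);
  std::vector<size_t> order;
  while (!ready.empty()) {
    size_t n = ready.front();
    ready.pop();
    order.push_back(n);
    for (size_t next : adj[n])
      if (--in_degree[next] == 0) ready.push(next);
  }
  return order;  // order.size() < num_nodes would indicate a cycle
}
```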
- Fix FusionResolution with missing tensors
When fusing, if the fused node contains fewer tensors than the original
pattern (Gelu and FastGelu ignore many initializers), the leftover tensors
are also deleted from the subgraph's inputs and initializers.
A tensor is only deleted after checking that it has neither a producer nor
any consumers.
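The safety check described above amounts to the following sketch. The `Tensor` struct and field names here are hypothetical placeholders for the DnnlTensor connectivity tracked through NodeArg, not the EP's real types:

```cpp
#include <vector>

// Hypothetical stand-in for a subgraph tensor's connectivity.
struct Tensor {
  bool has_producer = false;       // set if some node writes this tensor
  std::vector<int> consumers;      // indices of nodes reading this tensor
};

// A tensor orphaned by a fusion may only be removed from the subgraph's
// inputs/initializers once nothing produces it and nothing consumes it.
bool SafeToDelete(const Tensor& t) {
  return !t.has_producer && t.consumers.empty();
}
```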
Signed-off-by: Wang <zhaoyang.wang@intel.com>
* Gelu and FastGelu Fusion for DNNL EP
The Gelu/FastGelu code is modeled after:
- core/optimizer/fast_gelu_fusion.cc and
- core/optimizer/gelu_fusion.cc
oneDNN does not support 'Erf' unless it is part of a 'Gelu'.
As a result, the 'Gelu' pattern is detected twice: once when determining
whether the 'Erf' operator is supported, and again in the subgraph
transformer code. The capability code finds the Gelu using
onnxruntime::GraphViewer and onnxruntime::Node, while the transformer code
uses DnnlSubgraph and DnnlNode. This results in two pieces of code looking
for the same pattern but, unfortunately, having little code reuse.
This also adds support for biased versions of Gelu and FastGelu if they
already exist in a model.
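For reference, the math behind the two patterns: the Gelu subgraph computes the exact Erf-based formula, while FastGelu computes its standard tanh approximation. A small numeric sketch (standalone functions, not the EP's kernels):

```cpp
#include <cmath>

// Exact Gelu as computed by the Erf-based subgraph:
//   Gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
double GeluErf(double x) {
  return 0.5 * x * (1.0 + std::erf(x / std::sqrt(2.0)));
}

// Tanh approximation matched by the FastGelu pattern:
//   0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
double GeluTanh(double x) {
  const double pi = std::acos(-1.0);
  const double k = std::sqrt(2.0 / pi);
  return 0.5 * x * (1.0 + std::tanh(k * (x + 0.044715 * x * x * x)));
}
```

The two agree closely over typical activation ranges, which is why both patterns can map to the same fused oneDNN Gelu kernel.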
Signed-off-by: George Nash <george.nash@intel.com>
* Code Clean Up
Signed-off-by: Wang <zhaoyang.wang@intel.com>
Co-authored-by: Wang <zhaoyang.wang@intel.com>