onnxruntime
e8ba5145 - Add Transpose, Reshape, Pow and LeakyRelu ops to DNNL execution provider (#9180)

Add Transpose, Reshape, Pow and LeakyRelu ops to DNNL execution provider (#9180)

* Transpose for DNNL EP

  Transpose reorders the memory into the right format, but the result has the wrong dimensions and memory::format, so a new memory descriptor is created that points to the reordered memory. However, that memory is in a different location than the output expects, so an extra parameter was added to SetMemory to specify that the memory must be copied when it is an output of the subgraph. (See the reorder sketch after this commit message.)

  Signed-off-by: George Nash <george.nash@intel.com>

* Implementation of Reshape op for dnnl ep

  Signed-off-by: George Nash <george.nash@intel.com>

* Add Pow op to dnnl execution provider

  This Pow is limited: the exponent must be a scalar or a one-dimensional tensor with a single element. The exponent must also be a constant initializer, because it is only read when the primitive is created; oneDNN has no way to change the exponent after the primitive is created. The GraphViewer is now passed into the NodeCapability code, since it is needed to determine whether an input is a constant initializer. The existing "Pow" unit tests did not make the exponent a constant initializer, so a variant of those tests that does was added for the DNNL execution provider to help verify its Pow implementation. (See the eltwise_pow sketch after this commit message.)

  Signed-off-by: George Nash <george.nash@intel.com>

* Add LeakyRelu to DNNL execution provider

  LeakyRelu was added to the dnnl elementwise ops. In the elementwise op, the GetAlpha method was modified to take the default value for alpha as a parameter instead of reading it from a member variable, which is less likely to cause programmer error. (See the LeakyRelu sketch after this commit message.)

  Signed-off-by: George Nash <george.nash@intel.com>

* Switch dnnl_code_capability DataTypes from strings to enums

  Signed-off-by: George Nash <george.nash@intel.com>

* Update DnnlSubgraphPrimitive.GetMemory function input

  The GetMemory member function now takes a DnnlTensor instead of a string. This was done for two reasons: every call site already passed DnnlTensor.Name() (it was never called with a saved string), so this reduces repetition, and it makes the signature more closely match GetMemoryAndReshape, leaving fewer differences between member functions. (See the GetMemory sketch after this commit message.)

  Signed-off-by: George Nash <george.nash@intel.com>
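
The Transpose note above describes reordering data into a transposed layout and then wrapping the reordered buffer in a descriptor with the permuted dims. Below is a minimal, self-contained sketch of that idea using the public oneDNN C++ API; the shapes, strides, and variable names are illustrative and are not the EP's actual code.

```cpp
#include <dnnl.hpp>
#include <vector>

int main() {
  using namespace dnnl;
  engine eng(engine::kind::cpu, 0);
  stream strm(eng);

  const memory::dims src_dims = {2, 3};  // logical input shape
  const memory::dims dst_dims = {3, 2};  // shape after the transpose

  // Source: plain row-major layout.
  auto src_md = memory::desc(src_dims, memory::data_type::f32,
                             memory::format_tag::ab);
  // Reorder target: same dims as the source, but strides chosen so the
  // data is physically written in transposed order.
  auto reorder_md = memory::desc(src_dims, memory::data_type::f32,
                                 memory::dims{1, 2} /*strides*/);

  std::vector<float> src_data = {0, 1, 2, 3, 4, 5};
  std::vector<float> dst_data(6, 0.f);
  auto src_mem = memory(src_md, eng, src_data.data());
  auto dst_mem = memory(reorder_md, eng, dst_data.data());

  reorder(src_mem, dst_mem).execute(strm, src_mem, dst_mem);
  strm.wait();

  // The buffer now holds the transposed data, but dst_mem still reports
  // dims {2, 3}; wrap the same buffer in a descriptor with the real dims.
  auto transposed_md = memory::desc(dst_dims, memory::data_type::f32,
                                    memory::format_tag::ab);
  auto transposed_mem = memory(transposed_md, eng, dst_mem.get_data_handle());
  // dst_data is now {0, 3, 1, 4, 2, 5}, i.e. the 3x2 transpose.
  return 0;
}
```

Because the rewrapped memory aliases the reorder's buffer rather than the buffer the subgraph output points at, an explicit copy is needed when the transposed tensor is a subgraph output, which is what the extra SetMemory parameter mentioned above controls.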
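For the Pow limitation described above: oneDNN's eltwise_pow computes dst = alpha * src^beta, and both alpha and beta are baked into the primitive descriptor when it is built, which is why the exponent must be a constant initializer. A minimal sketch using the oneDNN 2.x-style eltwise API (values and names are illustrative, not the EP's actual code):

```cpp
#include <dnnl.hpp>
#include <vector>

int main() {
  using namespace dnnl;
  engine eng(engine::kind::cpu, 0);
  stream strm(eng);

  auto md = memory::desc({4}, memory::data_type::f32, memory::format_tag::a);
  std::vector<float> data = {1.f, 2.f, 3.f, 4.f};
  auto src = memory(md, eng, data.data());
  auto dst = memory(md, eng);

  const float exponent = 2.f;  // must be known at primitive-creation time

  // dst = alpha * src^beta; alpha = 1, beta = exponent.
  auto pow_desc = eltwise_forward::desc(prop_kind::forward_inference,
                                        algorithm::eltwise_pow, md,
                                        /*alpha=*/1.f, /*beta=*/exponent);
  auto pow_pd = eltwise_forward::primitive_desc(pow_desc, eng);

  eltwise_forward(pow_pd).execute(strm, {{DNNL_ARG_SRC, src},
                                         {DNNL_ARG_DST, dst}});
  strm.wait();
  return 0;
}
```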
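For the LeakyRelu bullet: in oneDNN, LeakyRelu is eltwise_relu with a nonzero alpha acting as the negative slope (dst = src for src >= 0, alpha * src otherwise), and ONNX's default alpha for LeakyRelu is 0.01. The sketch below also illustrates the "default passed by the caller" GetAlpha pattern described above; the GetAlpha helper and the plain attribute map are stand-ins, not the actual onnxruntime attribute API.

```cpp
#include <dnnl.hpp>
#include <string>
#include <unordered_map>

// Stand-in: read an optional node attribute with a caller-supplied default,
// so each elementwise op states its own default at the call site.
float GetAlpha(const std::unordered_map<std::string, float>& attrs,
               float default_alpha) {
  auto it = attrs.find("alpha");
  return it != attrs.end() ? it->second : default_alpha;
}

dnnl::eltwise_forward::primitive_desc MakeLeakyRelu(
    const dnnl::engine& eng, const dnnl::memory::desc& md,
    const std::unordered_map<std::string, float>& attrs) {
  float alpha = GetAlpha(attrs, /*default_alpha=*/0.01f);
  // LeakyRelu: dst = src when src >= 0, alpha * src otherwise.
  auto desc = dnnl::eltwise_forward::desc(dnnl::prop_kind::forward_inference,
                                          dnnl::algorithm::eltwise_relu, md,
                                          alpha);
  return dnnl::eltwise_forward::primitive_desc(desc, eng);
}
```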
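A hypothetical before/after illustration of the GetMemory signature change described in the last bullet; DnnlTensor and DnnlSubgraphPrimitive here are simplified stand-ins, not the actual onnxruntime classes.

```cpp
#include <dnnl.hpp>
#include <string>
#include <unordered_map>
#include <utility>

class DnnlTensor {            // stand-in for the EP's tensor wrapper
 public:
  explicit DnnlTensor(std::string name) : name_(std::move(name)) {}
  const std::string& Name() const { return name_; }
 private:
  std::string name_;
};

class DnnlSubgraphPrimitive {  // stand-in for the EP's subgraph class
 public:
  // Before: callers always wrote GetMemory(tensor.Name()).
  //   dnnl::memory GetMemory(const std::string& name);
  // After: take the tensor directly, matching GetMemoryAndReshape.
  dnnl::memory GetMemory(const DnnlTensor& tensor) {
    return memories_.at(tensor.Name());
  }
 private:
  std::unordered_map<std::string, dnnl::memory> memories_;
};
```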