oneDNN ep bf16 enabling (#13484)
### Description
This adds bfloat16 support to the oneDNN EP.
When using the oneDNN EP this enables bfloat16 support for the following
ops:
Exp, Sigmoid, Tanh, Relu, MatMul, Gelu, BiasGelu, Add, Sub,
Mul, Div, Sqrt, Pow, ReduceMean, Abs, Cast, Equal,
FastGelu, FusedMatMul, Gemm, Greater, GreaterOrEqual, LeakyRelu,
Less, LessOrEqual, LRN, ReduceOps, Reshape, Squeeze, Transpose,
and Unsqueeze.
LayerNorm is supported with some internal casting.
BatchNorm only enables BFloat16 for the input and output; scale and bias
still need fp32 inputs.
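For example, the BatchNorm constraint means a bfloat16 test has to mix input types, roughly like this (a minimal sketch using the `OpTester` test utility and the `MakeBFloat16` float-to-bfloat16 helper):

```cpp
OpTester test("BatchNormalization", 15);
// X and Y use BFloat16...
test.AddInput<BFloat16>("X", {1, 2, 1, 1}, MakeBFloat16({1.0f, 2.0f}));
// ...while scale, B, mean, and var must remain fp32.
test.AddInput<float>("scale", {2}, {1.0f, 1.0f});
test.AddInput<float>("B", {2}, {0.0f, 0.0f});
test.AddInput<float>("input_mean", {2}, {0.0f, 0.0f});
test.AddInput<float>("input_var", {2}, {1.0f, 1.0f});
test.AddOutput<BFloat16>("Y", {1, 2, 1, 1}, MakeBFloat16({1.0f, 2.0f}));
test.Run();
```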
Added bfloat16 unit tests for all of the operators in question. Where
possible we reused the existing unit tests that were added by the
CUDA and ROCm EPs.
In many of the unit tests an unusual pattern will be seen:

```cpp
#if defined(USE_DNNL)
TEST(Test, bfloat16_test) {
#if defined(USE_DNNL)
  // oneDNN EP specific code
#endif
  // test code
}
#endif
```
Although it looks unusual, this was done on purpose: if another EP
implements bfloat16 support for that operator, it can enable the unit
test by adding its execution provider to the first line without needing
to edit inside the test.
Example: `#if defined(USE_CUDA) || defined(USE_DNNL)`. See the
MatMul_float16 test in matmul_test.cc for an example of how this is
useful.
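To illustrate, here is a minimal sketch of what the pattern might look like once filled in. The `DnnlHasBF16Support()` hardware check is a hypothetical helper named only for illustration; `OpTester` and `MakeBFloat16` are the existing test utilities:

```cpp
#if defined(USE_DNNL)
TEST(MathOpTest, Add_bfloat16) {
#if defined(USE_DNNL)
  // Skip when the CPU cannot run bfloat16 kernels.
  // DnnlHasBF16Support() is a hypothetical helper, shown for illustration.
  if (!DnnlHasBF16Support()) {
    LOGS_DEFAULT(WARNING) << "Hardware does not support BFloat16, skipping test";
    return;
  }
#endif
  // Test body shared by every EP that enables this test.
  OpTester test("Add", 13);
  test.AddInput<BFloat16>("A", {3}, MakeBFloat16({1.0f, 2.0f, 3.0f}));
  test.AddInput<BFloat16>("B", {3}, MakeBFloat16({4.0f, 5.0f, 6.0f}));
  test.AddOutput<BFloat16>("C", {3}, MakeBFloat16({5.0f, 7.0f, 9.0f}));
  test.Run();
}
#endif
```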
Additionally, two new ISA checks (AVX512_BF16 and AMX-BF16) were added to
the cpuid_info code. These are important for detecting whether bfloat16
operations are supported by the CPU.
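A rough sketch of how these checks can be consumed (the accessor names below are assumptions modeled on the existing `CPUIDInfo` getters, not necessarily the exact ones added here):

```cpp
#include "core/common/cpuid_info.h"

// Decide whether bfloat16 kernels may be dispatched on this CPU.
// HasAVX512_BF16()/HasAMX_BF16() are assumed accessor names that follow
// the existing CPUIDInfo naming convention.
bool CpuSupportsBF16() {
  const auto& cpuid = onnxruntime::CPUIDInfo::GetCPUIDInfo();
  return cpuid.HasAVX512_BF16() || cpuid.HasAMX_BF16();
}
```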
### Motivation and Context
This expands the capabilities of the oneDNN execution provider to
support models containing bfloat16 operations.
Signed-off-by: George Nash <george.nash@intel.com>
Signed-off-by: Ruihan-Yin <ruihan.yin@intel.com>