[CPU EP] Add blocked quantization to QuantizeLinear op kernel (#20977)
### Description
Add blocked quantization to the QuantizeLinear op kernel.
If the quantize axis is not the last axis, the tensor is blocked into 1x128
blocks. Blocks are dispatched to multiple threads for concurrent
processing. Currently only scalar instructions are supported.
If the quantize axis is the last axis, the tensor is blocked into 1 x
quant_block_size blocks, which are likewise dispatched to multiple threads
for concurrent processing. If the output type is an integer type, the MLAS
kernel is called inside each block to use SIMD instructions.
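For reference, the per-block semantics can be sketched in numpy. This is a simplified model of ONNX blocked QuantizeLinear (zero point assumed 0, signed 4-bit range assumed), not the ORT kernel itself; `quantize_linear_blocked` and its parameters are hypothetical names:

```python
import numpy as np

def quantize_linear_blocked(x, scale, axis, block_size, qmin=-8, qmax=7):
    # Each run of `block_size` elements along `axis` shares one scale,
    # so broadcast every per-block scale across its block.
    s = np.repeat(scale, block_size, axis=axis)
    # Trim in case the axis length is not a multiple of block_size.
    s = np.take(s, range(x.shape[axis]), axis=axis)
    q = np.rint(x / s)  # round half to even, as the ONNX spec requires
    return np.clip(q, qmin, qmax).astype(np.int8)

# e.g. a 196 x 4096 float tensor quantized along the last axis with
# block_size 128 takes a scale tensor of shape 196 x 32.
```

The kernel's two code paths correspond to how these blocks are laid out in memory: blocks along the last axis are contiguous, which is what makes the SIMD (MLAS) path possible there.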
#### Benchmark data
20-core 2 GHz CPU, RelWithDebInfo config, 196 x 4096 tensor, quantize
float to int4x2.
Quantize before last axis:
* single thread, scalar instruction: 31380900 ns
* 8 thread, scalar instruction: 5098620 ns
Quantize last axis:
* single thread, scalar instruction: 27927900 ns
* 8 thread, SIMD instruction: 102261 ns
More threads, SIMD instructions, and a larger quant_block_size all improve performance.
### Motivation and Context
ONNX added blocked quantization to QuantizeLinear in opset 21.