Improves parallelization by trees for TreeEnsemble (#13835)
### Description
If the number of trees is >= 100 and the batch size is >= 2000,
parallelization by trees becomes slower than parallelization by rows.
However, when parallelization by trees is applied to smaller chunks of
data, it remains faster than parallelization by rows. The following
script was used to measure the performance
[plot_gexternal_lightgbm_reg_per.zip](https://github.com/microsoft/onnxruntime/files/10149092/plot_gexternal_lightgbm_reg_per.zip)
with different thresholds. The graphs were produced by the script shown
after the graphs.
* //N means parallelization by rows
* //T means parallelization by trees
* //T-128 means parallelization by trees every batch of 128 rows.
* //T-1024 means parallelization by trees every batch of 1024 rows.
The following graphs show that parallelization by trees is better than
parallelization by rows on small batches only. It is even better to
split the input tensor into chunks of 128 rows and parallelize by trees
on every chunk. The proposed changes implement that optimization.
The same idea is applied even when there is only one thread. The change
also makes sure a single thread is used when the user requests only one.
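The chunked strategy can be sketched as follows (an illustrative Python sketch, not the actual C++ kernel; `predict_tree`, `chunk_size`, and `n_threads` are hypothetical names, and each "tree" is replaced by a linear function so the sketch stays runnable):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def predict_tree(tree, X):
    # Stand-in for evaluating one decision tree on a batch of rows;
    # here a "tree" is just a weight vector so the sketch is runnable.
    return X @ tree

def predict_chunked(trees, X, chunk_size=128, n_threads=4):
    """Parallelize by trees, but over chunks of rows: the per-tree
    partial results for one chunk stay small and cache-friendly."""
    out = np.zeros(X.shape[0])
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for start in range(0, X.shape[0], chunk_size):
            chunk = X[start:start + chunk_size]
            # The threads split the trees, all scoring the same chunk;
            # the partial results are then summed for that chunk.
            parts = pool.map(lambda t: predict_tree(t, chunk), trees)
            out[start:start + chunk_size] = sum(parts)
    return out

rng = np.random.default_rng(0)
trees = [rng.normal(size=5) for _ in range(200)]
X = rng.normal(size=(1000, 5))
y = predict_chunked(trees, X)
```

The outer loop walks the rows chunk by chunk while the thread pool splits the trees within each chunk, which is the "//T-128"-style strategy compared in the graphs.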

```python
import pandas
import matplotlib.pyplot as plt

# (name, file) pairs: one CSV of timings per parallelization strategy.
filenames = [
    ("//N", "plot_gexternal_lightgbm_reg_per_N.csv"),
    ("//T", "plot_gexternal_lightgbm_reg_per_T.csv"),
    ("//T-128", "plot_gexternal_lightgbm_reg_per_128.csv"),
    ("//T-1024", "plot_gexternal_lightgbm_reg_per_1024.csv"),
]

dfs = []
for name, filename in filenames:
    df = pandas.read_csv(filename)
    # Prefix the batch columns with the strategy name so they can be merged.
    for c in df.columns:
        if "batch" in c:
            df[f"-{name}-{c}"] = df[c]
    dfs.append(df)

# Merge the renamed columns of every strategy into a single dataframe.
df = dfs[0][["N"]].copy()
for _df in dfs:
    for c in _df.columns:
        if c[0] == "-":
            df[c] = _df[c].copy()

# One subplot per number of trees.
fig, ax = plt.subplots(1, 3, figsize=(14, 6))
Ts = [50, 500, 2000]
ga = df.set_index("N")
for i, nt in enumerate(Ts):
    cs = [c for c in ga.columns if c.endswith(f"-{nt}")]
    ga[cs].plot(ax=ax[i], title=f"Trees={nt}", logy=True, logx=True)
```
Below is the performance gain for the monothread implementation,
obtained by looping over the data in the inner loop.
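The monothread gain can be illustrated with a small sketch (illustrative Python, not the actual C++ kernel; `predict_monothread` and `chunk_size` are hypothetical names, and the dot product stands in for a real tree walk): within each chunk of rows, the inner loop runs over the data for a fixed tree, so a tree's structure stays hot in cache while many rows traverse it.

```python
import numpy as np

def predict_monothread(trees, X, chunk_size=128):
    """Single-thread scoring: for each chunk of rows, loop over the
    trees and let each tree score the whole chunk (the data is in the
    inner loop), keeping the small per-chunk accumulator cache-resident."""
    out = np.zeros(X.shape[0])
    for start in range(0, X.shape[0], chunk_size):
        chunk = X[start:start + chunk_size]
        acc = np.zeros(chunk.shape[0])
        for tree in trees:
            # Stand-in for evaluating one tree on every row of the chunk;
            # a real tree traversal would replace this dot product.
            acc += chunk @ tree
        out[start:start + chunk_size] = acc
    return out

rng = np.random.default_rng(0)
trees = [rng.normal(size=5) for _ in range(150)]
X = rng.normal(size=(700, 5))
y = predict_monothread(trees, X)
```

The result is identical to accumulating all trees over the full tensor at once; only the loop order and working-set size change.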

### Motivation and Context
Performance.
Signed-off-by: xadupre <xadupre@microsoft.com>