[ty] lower `MAX_RECURSIVE_UNION_LITERALS` (#23521)
## Summary
This is a simpler approach to the performance issues mentioned in
#23520.
Profiling results on isort suggested that #22794 introduced a slowly
converging fixpoint calculation rather than a specific hotspot in the
type inferrer.
What makes type inference for loop variables more troublesome than other
kinds of inference is the reachability calculation.
For other kinds, such as implicit attribute type inference, reachability
analysis is not performed per binding (it was reverted due to
performance issues: https://github.com/astral-sh/ruff/pull/20128,
https://github.com/astral-sh/ty/issues/2117); all bindings are treated
as reachable.
Loop variables, by contrast, do perform this heavy calculation (omitting
reachability analysis from the `LoopHeader` branch of
`infer_loop_header_definition` and `place_from_bindings_impl`
significantly improves performance).
It also appears that slow convergence for one variable in a loop body
slows down the inference of every other definition in the block that
depends on it.
To alleviate the issue, this PR reduces `MAX_RECURSIVE_UNION_LITERALS`
from 10 to 5. This yields faster convergence when a loop variable's
inferred type grows like `Literal[0, 1, ...]`.
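To illustrate why a lower cap converges faster, here is a minimal Python sketch of capped literal-union widening during fixpoint iteration. This is a simplified model, not ty's actual implementation: the names `union_with_literal` and `iterations_to_converge` and the widen-straight-to-`int` rule are assumptions made for illustration.

```python
def union_with_literal(current, literal, cap):
    """Add one int literal to a union; widen to plain `int` past the cap.

    `current` is either a frozenset of literal values or the string "int"
    (standing in for the widened type). This models, loosely, how a union
    of literals stops growing once it exceeds the threshold.
    """
    if current == "int":  # already widened: stays widened
        return "int"
    widened = current | {literal}
    if len(widened) > cap:
        return "int"  # too many members: give up on literal precision
    return frozenset(widened)


def iterations_to_converge(cap):
    """Simulate fixpoint iteration for something like `x = x + 1` in a loop.

    Each pass adds one new literal to the inferred type until the type
    stops changing; returns the number of passes needed.
    """
    ty, value = frozenset({0}), 0
    for passes in range(1, 1000):
        value += 1
        new_ty = union_with_literal(ty, value, cap)
        if new_ty == ty:
            return passes  # type stabilized: fixpoint reached
        ty = new_ty
    raise RuntimeError("did not converge")
```

In this model, a cap of 5 reaches a fixpoint in roughly half the passes that a cap of 10 needs, which matches the intuition behind the measured speedup: every extra allowed literal costs one more iteration before the union widens and the type stabilizes.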
Local measurements show that this PR alone reduced the time to check
isort by about 37%.
I chose 5 as the new value because it offers a good balance between
type inference precision and performance, based on the following
measurements:
| Threshold    | Mean  | vs. 10 |
|--------------|-------|--------|
| 4            | 0.89s | -38%   |
| 5            | 0.91s | -37%   |
| 6            | 1.05s | -27%   |
| 7            | 1.06s | -27%   |
| 8            | 1.23s | -14%   |
| 9            | 1.26s | -13%   |
| 10 (current) | 1.43s | —      |
This PR composes with the other mitigation in #23520; together they
recover about 40-50% of the lost performance.
## Test Plan
N/A