[X86] shouldReduceLoadWidth - don't split loads if we can freely reuse full width legal binop (#129695)
Currently shouldReduceLoadWidth is very relaxed about when loads can be
split to avoid extractions from the original full width load - resulting
in many cases where the number of memory operations notably increases,
trading the cost of an extract_subvector for additional loads.
This patch adjusts the 256/512-bit vector load splitting metric to
detect cases where ANY use of the full width load is used directly - in
which case we will now reuse that load for smaller types, unless we'd
need to extract an upper subvector / integer element - i.e. we now
correctly treat (extract_subvector cst, 0) as free.
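A minimal sketch of the user scan this metric implies (names and exact
structure are assumptions, not the actual patch):

```cpp
// Sketch only: scan the users of a full width vector load. If any user
// consumes the full width value directly, splitting buys nothing, so we
// prefer to reuse the load - but only a lowest-subvector extract
// (index 0) is free; upper subvector / integer element extracts are not.
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

static bool shouldReuseFullWidthLoad(SDNode *Load) {
  bool HasDirectUse = false;
  for (SDNode *User : Load->users()) {
    switch (User->getOpcode()) {
    case ISD::EXTRACT_SUBVECTOR:
      // (extract_subvector cst, 0) is free; an upper extract is not.
      if (!isNullConstant(User->getOperand(1)))
        return false;
      break;
    case ISD::EXTRACT_VECTOR_ELT:
      // Integer element extraction is never free here.
      return false;
    default:
      HasDirectUse = true; // Full width value is consumed as-is.
      break;
    }
  }
  return HasDirectUse;
}
```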
We retain the existing logic of never splitting loads if all uses are
extract+stores, but we improve this by peeking through bitcasts while
looking for extract_subvector/store chains.
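A sketch of the improved chain walk (helper name assumed), which now
looks through single-use bitcasts before matching the
extract_subvector/store pattern:

```cpp
// Sketch only: check whether a user of the load is just an
// extract_subvector feeding a store, tolerating intermediate bitcasts.
// Only single-use bitcasts are peeked through, to bound the walk.
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

static bool isExtractStoreChain(SDNode *User) {
  while (User->getOpcode() == ISD::BITCAST && User->hasOneUse())
    User = *User->user_begin();
  if (User->getOpcode() != ISD::EXTRACT_SUBVECTOR || !User->hasOneUse())
    return false;
  SDNode *Next = *User->user_begin();
  while (Next->getOpcode() == ISD::BITCAST && Next->hasOneUse())
    Next = *Next->user_begin();
  return Next->getOpcode() == ISD::STORE;
}
```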
This required a number of fixes - shouldReduceLoadWidth now needs to
peek through bitcasts UP the use-chain to find final users (limited to
hasOneUse cases to reduce complexity). It also exposed an issue in
isTargetCanonicalConstantNode, which assumed that a load of vector
constant data would always be extracted from - this is no longer the
case.
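To illustrate the isTargetCanonicalConstantNode issue (purely
illustrative, assumed helper name - not the actual X86 override): any
check that took the extract for granted must now verify the opcode,
since the constant load may be consumed at full width:

```cpp
// Sketch only: it used to be safe to assume vector constant data loaded
// from the constant pool reached this check via an extract_subvector;
// with full width reuse that assumption no longer holds.
#include "llvm/CodeGen/SelectionDAGNodes.h"
using namespace llvm;

static bool isExtractOfConstantLoad(SDValue Op) {
  if (Op.getOpcode() != ISD::EXTRACT_SUBVECTOR)
    return false; // May now be the full width constant load itself.
  return ISD::isNormalLoad(Op.getOperand(0).getNode());
}
```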