[CPU] Fix SDPA node attention mask precision handling for bf16/f16 inference (#33132)
### Details:
- *Use the actual precision of the attention mask input instead of the
compute precision (bf16/f16), fixing corrupted LFM2-350M outputs when
running low-precision inference on Xeon platforms.*
### Tickets:
- *CVS-177340*
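A minimal sketch of the idea behind the fix (all names here are illustrative assumptions, not the actual OpenVINO CPU plugin API): if the SDPA node reads the attention mask in the kernel's compute precision rather than the mask's own input precision, an f32 mask can be silently narrowed. For example, a masked position encoded as `-1e9` in f32 overflows to `-inf` when reinterpreted as f16 (whose maximum finite magnitude is about 65504), and bf16 truncation can distort finite mask values, which corrupts the softmax and hence the attention output.

```cpp
#include <cassert>

// Hypothetical precision tags for illustration only.
enum class Precision { f32, f16, bf16 };

// Before the fix (assumed behavior): the mask was read in the
// kernel's compute precision, regardless of the mask input's type.
inline Precision mask_precision_before(Precision /*mask_input*/,
                                       Precision compute) {
    return compute;
}

// After the fix (assumed behavior): the mask keeps the precision of
// the actual attention mask input.
inline Precision mask_precision_after(Precision mask_input,
                                      Precision /*compute*/) {
    return mask_input;
}
```

With bf16 compute and an f32 mask input, the old selection would read the mask as bf16, while the fixed selection keeps it in f32.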