[QNN EP] Adjust tolerance for Clip and Transpose tests due to FP16 default in QNN HTP (#26499)
### Description
This PR updates the tolerance thresholds for the Clip and Transpose
tests in QnnHTPBackendTests. The adjustment accounts for minor accuracy
differences introduced by the change in the default floating-point
precision in QNN HTP starting with version 2.35.
### Motivation and Context
Since QNN 2.35, the default floating-point precision in QNN HTP has
changed from FP32 to FP16. Additionally, the configuration option
`QNN_HTP_GRAPH_CONFIG_OPTION_PRECISION` has been deprecated.
This precision change can introduce expected accuracy loss, especially
in scenarios where graph inputs and outputs are defined as FP32 but
internal computations are performed in FP16 (i.e., FP32 → FP16 → FP32
conversions). To accommodate this, the tolerance thresholds for the
affected tests have been increased to prevent false negatives caused by
precision differences.
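As an illustrative sketch (not the actual test code), the round-trip error introduced by an FP16 compute path can be measured with NumPy; the tolerance values below are hypothetical, chosen only to show why a strict FP32-level threshold produces false negatives once intermediates are FP16:

```python
import numpy as np

# FP32 values that pass through an FP16 compute path lose precision;
# the maximum relative error for normal FP16 values is about 2^-11 (~4.9e-4).
fp32 = np.array([0.1, 1.2345, 100.001, -3.14159], dtype=np.float32)
roundtrip = fp32.astype(np.float16).astype(np.float32)  # FP32 -> FP16 -> FP32

abs_err = np.abs(roundtrip - fp32)
rel_err = abs_err / np.maximum(np.abs(fp32), np.finfo(np.float32).tiny)

fp32_tol = 1e-6  # hypothetical strict tolerance sized for FP32 math
fp16_tol = 1e-3  # hypothetical loosened tolerance covering FP16 rounding

# Under the strict tolerance the round-trip values would be flagged as
# failures, while the loosened tolerance accepts them.
assert rel_err.max() > fp32_tol
assert rel_err.max() <= fp16_tol
```

This mirrors the reasoning behind the PR: the outputs are still numerically correct for an FP16 pipeline, so the test thresholds must be sized for FP16 precision rather than FP32.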
@microsoft-github-policy-service agree company="Qualcomm"