Fix cumprod f16 opinfo test via ref-in-float + increasing tolerances (#109128)
Without setting `reference_in_float`, cumprod's single-sample case
passes (i.e. the compiled f16 result matches the eager-mode f16 result;
in fact they are identical, because both call into aten). However, the
gradient calculation does not line up with the eager result.
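For intuition, here is a minimal standalone sketch (not the opinfo harness itself) of how an all-f16 cumprod gradient can drift from a gradient computed in higher precision and then downcast; the shape and the resulting error magnitude are illustrative only:

```python
import torch

# Same data in f64 and f16; both leaf tensors so we can compare gradients.
x64 = torch.randn(32, dtype=torch.float64, requires_grad=True)
x16 = x64.detach().to(torch.float16).requires_grad_(True)

# Gradient computed entirely in f16 (what eager f16 produces) ...
torch.cumprod(x16, dim=0).sum().backward()
# ... versus a reference gradient computed in f64 and then downcast
# (what `reference_in_float`-style checking compares against).
torch.cumprod(x64, dim=0).sum().backward()
ref_grad = x64.grad.to(torch.float16)

# This gap is what the test tolerances have to absorb.
print((x16.grad - ref_grad).abs().max())
```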
Turning on `reference_in_float` causes the grad check to pass (i.e. we
are closer to the more accurate f64 grad calculation), but it causes the
single-sample case to fail. Since the compiled f16 result is no less
accurate than the eager f16 result for the single sample, relaxing the
tolerances here seems fine.
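For illustration, the fix amounts to a per-op override in the inductor opinfo test (the `inductor_override_kwargs` dict in test/inductor/test_torchinductor_opinfo.py); the entry below is a sketch, and the atol/rtol values are illustrative rather than necessarily the ones landed here:

```python
inductor_override_kwargs = {
    # Compare the compiled f16 output against an upcast-float reference,
    # and widen tolerances so the (equally accurate) f16 forward still passes.
    # Illustrative values; see the PR diff for the actual numbers.
    "cumprod": {"reference_in_float": True, "atol": 7e-5, "rtol": 0.002},
}
```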
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109128
Approved by: https://github.com/eellison
ghstack dependencies: #109081, #109089