[inductor] Fix a cpp wrapper codegen issue for _scaled_dot_product_efficient_attention (#102624)
Summary: This fixes a cpp_wrapper coverage drop on TIMM models, as
shown in the recent inference dashboard.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102624
Approved by: https://github.com/ngimel, https://github.com/jansel