[inductor] Add scaled_dot_product_attention to fallback kernels (#93339)
Summary:
We don't have decompositions/lowerings for scaled_dot_product_attention (SDPA),
and probably won't for a while, so register it as an expected fallback kernel
instead of warning every time it falls back to the eager implementation.
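As a rough sketch of the idea (all names here are illustrative, not Inductor's actual internals): a lowering registry can keep an explicit set of ops that are *expected* to fall back to their eager kernels, and only warn for ops missing from both the registry and that set.

```python
import warnings

# Illustrative sketch only; not Inductor's real API or data structures.
lowerings = {}    # op name -> lowering function (generates backend code)
fallbacks = set() # ops intentionally run via the eager kernel, no warning

def make_fallback(op):
    """Mark op as an expected fallback: run eagerly, stay silent."""
    fallbacks.add(op)

def lower(op):
    if op in lowerings:
        return lowerings[op]()
    if op not in fallbacks:
        # Only surprise fallbacks are worth warning about.
        warnings.warn(f"no lowering for {op}; falling back to eager")
    return f"eager:{op}"  # stand-in for calling the eager kernel

# Registering SDPA this way suppresses the per-call warning.
make_fallback("aten._scaled_dot_product_attention")
result = lower("aten._scaled_dot_product_attention")
```

With SDPA in the fallback set, `lower` returns the eager path silently; an unregistered op would still trigger the warning.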
Test Plan: code inspection
Differential Revision: D42878203
Pull Request resolved: https://github.com/pytorch/pytorch/pull/93339
Approved by: https://github.com/desertfire, https://github.com/drisspg