[Inductor] Add fused_attention pattern matcher with additional clone (#108141)
A previous PR https://github.com/pytorch/pytorch/pull/106274 decomposes `aten.dropout` into a `clone()` when the model is in `eval()` mode or when `p=0`. The extra `clone()` node causes many SDPA-related models to fail to match the fused_attention pattern matchers.
This PR adds new fused_attention pattern matchers that include the additional clone, re-enabling SDPA op matching.
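The idea can be sketched abstractly (this is not the actual Inductor implementation, which matches FX graph patterns; the op names and the linearized-trace matcher here are illustrative only): an exact-sequence pattern stops firing once the dropout decomposition inserts a `clone` node, and a second pattern variant that includes the clone restores the match.

```python
# Illustrative sketch: pattern matching over a linearized op trace,
# showing why an extra clone() node breaks an exact match and how a
# pattern variant with the clone re-enables it.

def matches(trace, pattern):
    """Return True if `pattern` appears as a contiguous run inside `trace`."""
    n, m = len(trace), len(pattern)
    return any(trace[i:i + m] == pattern for i in range(n - m + 1))

# Original fused-attention pattern: matmul -> softmax -> matmul.
SDPA_PATTERN = ["matmul", "softmax", "matmul"]

# Variant with the clone() that the dropout decomposition emits when
# eval() or p=0 turns dropout into a no-op copy.
SDPA_PATTERN_WITH_CLONE = ["matmul", "softmax", "clone", "matmul"]

# Trace produced after aten.dropout is decomposed in eval mode.
trace = ["matmul", "softmax", "clone", "matmul"]

assert not matches(trace, SDPA_PATTERN)         # original pattern no longer fires
assert matches(trace, SDPA_PATTERN_WITH_CLONE)  # clone variant matches again
```

In the real pass, both pattern variants are registered so that models match whether or not the decomposition leaves the clone behind.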
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108141
Approved by: https://github.com/jgong5, https://github.com/eellison