Flash attention recompute (#20603)
### Flash attn recompute
1. Allow PythonOp(FlashAttn) to be recomputed correctly (see the sketch after this list).
https://github.com/microsoft/onnxruntime/pull/20603/commits/45879ff5c20bf4cc11840b38b1808572126c5368
2. Use a JSON file to pass the subgraphs selected for recompute.
https://github.com/microsoft/onnxruntime/pull/20603/commits/3c374da6788474cd09ba931eb0b00a45fa3f43e0
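For context on item 1, here is a minimal sketch, assuming the `flash-attn` package (the customer model's attention code may differ). `flash_attn_func` is backed by a `torch.autograd.Function`, which ORTModule exports as a PythonOp node in the ONNX training graph; this PR lets the recompute pass duplicate that node correctly.

```python
# Sketch only: a module whose attention dispatches to Flash Attention kernels.
# The autograd.Function behind flash_attn_func surfaces as PythonOp in the
# exported training graph, which the recompute pass can now handle.
import torch
from flash_attn import flash_attn_func  # assumes flash-attn is installed


class FlashSelfAttention(torch.nn.Module):
    def forward(self, q, k, v):
        # q, k, v: (batch, seqlen, num_heads, head_dim), fp16/bf16 on CUDA.
        return flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
```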
#### Better Memory Efficiency
The customer model can run with both PyTorch SDPA and Flash Attn; this PR makes it possible for the Flash Attn path to work with ORTModule layerwise recompute. Peak memory drops from 45.x GB to 32.x GB when comparing only the transformer layers (not including other pieces; a few more optimizations targeting those pieces will follow later).
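As a usage sketch, assuming the `ORTMODULE_MEMORY_OPT_LEVEL` environment variable from the memory optimizer docs and a placeholder `MyTransformer` model, layerwise recompute is enabled like this; with this PR, the Flash Attn PythonOp inside each layer is recomputed as well:

```python
import os

# Level 1 requests transformer-layerwise recompute. Set before ORTModule
# wraps the model so the setting is picked up at export time.
os.environ["ORTMODULE_MEMORY_OPT_LEVEL"] = "1"

from onnxruntime.training import ORTModule

model = ORTModule(MyTransformer().cuda())  # MyTransformer is a placeholder
```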
#### Better Perf
Using Flash Attn brings an additional 16% end-to-end time reduction, with a highly aligned loss curve.

#### Use JSON File to pass Recompute Plans
Passing the recompute plans through a JSON file overcomes the maximum string length allowed for values defined in session options.
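A minimal sketch of the new flow, assuming the `ORTMODULE_MEMORY_OPT_CONFIG` environment variable accepts a JSON file path; the plan strings below are illustrative, and real ones come from the memory optimizer's own report:

```python
# Dump the selected-to-recompute subgraph plans to a JSON file and point the
# memory optimizer at the file, instead of packing plans into a
# session-option string that has a length cap.
import json
import os

plans = [
    "BiasGelu+:1:-1",  # illustrative: recompute this subgraph everywhere (-1)
    "Dropout+:1:1",    # illustrative: recompute for one occurrence only
]

with open("mem_opt.json", "w") as f:
    json.dump(plans, f)

# The config now carries a file path, so the plan list can be arbitrarily long.
os.environ["ORTMODULE_MEMORY_OPT_CONFIG"] = "mem_opt.json"
```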
### Motivation and Context
Customer models using Flash Attention previously could not benefit from ORTModule layerwise recompute; this change enables that path, cutting peak memory and end-to-end training time as described above.