onnxruntime
8a98874e - Flash attention recompute (#20603)

### Flash attn recompute

1. Allow PythonOp(FlashAttn) to be recomputed correctly. https://github.com/microsoft/onnxruntime/pull/20603/commits/45879ff5c20bf4cc11840b38b1808572126c5368
2. Use JSON to pass the selected-to-recompute subgraphs. https://github.com/microsoft/onnxruntime/pull/20603/commits/3c374da6788474cd09ba931eb0b00a45fa3f43e0

#### Better Memory Efficiency

The customer model can run with both PyTorch SDPA and Flash Attn; this PR makes it possible for the Flash Attn path to work with ORTModule layerwise recompute. Peak memory drops from 45.x GB to 32.x GB when comparing only the layers (not including other pieces; a few more optimizations targeting those pieces will follow later).

#### Better Perf

Using Flash Attn brings an additional 16% end-to-end time reduction, with a highly aligned loss curve.

![image](https://github.com/microsoft/onnxruntime/assets/10530022/bb63894a-f281-49bc-a8e6-ff818439be38)

#### Use a JSON File to Pass Recompute Plans

Passing the plans through a file overcomes the maximum-length limit on strings defined in session options.

### Motivation and Context
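The file-based hand-off of recompute plans can be sketched as below. This is a minimal illustration only: the JSON schema, the subgraph names, and the idea of passing just the file path through a session option are assumptions for the sketch, not ORTModule's actual plan format or option names.

```python
import json
import tempfile
from pathlib import Path

# Hypothetical recompute plan: names of subgraphs selected for recompute.
# The schema is illustrative, not ORTModule's real format.
plan = {
    "recompute_subgraphs": [
        "FlashAttnPythonOp_layer_0",
        "FlashAttnPythonOp_layer_1",
    ]
}

# Writing the plan to a file sidesteps any cap on session-option string
# length: only the short file path needs to cross the option boundary.
plan_path = Path(tempfile.gettempdir()) / "recompute_plan.json"
plan_path.write_text(json.dumps(plan))

# The training backend would then load the full plan from that path.
loaded = json.loads(plan_path.read_text())
print(len(loaded["recompute_subgraphs"]))
```

However large the selected-to-recompute subgraph list grows, the string actually handed to the session options stays a fixed-size path.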