pytorch
14177f0d - [BE] Make `USE_FLASH_ATTENTION` private (#97579)

🤖 Generated by Copilot at b07152e

This pull request refactors the CMake configuration so that the `USE_FLASH_ATTENTION` macro is defined for the `torch_cuda` target only, using a target-specific compile definition. This avoids conflicts with other libraries that also use this macro, such as fairseq.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97579
Approved by: https://github.com/kit1980
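The commit message describes the change only at a high level. As a rough illustration of the technique (a target-scoped compile definition instead of a global one), the sketch below shows what such a CMake change typically looks like. The `torch_cuda` target and the `USE_FLASH_ATTENTION` name come from the message above; the exact lines, file location, and surrounding option handling are assumptions, not the actual diff.

```cmake
# Illustrative sketch only; not the actual diff from pytorch/pytorch PR #97579.

# Before (assumed): a global definition makes -DUSE_FLASH_ATTENTION visible to
# every target in the build, so downstream code that also uses this macro
# (e.g. fairseq) can clash with it.
#   add_definitions(-DUSE_FLASH_ATTENTION)

# After: scope the macro to torch_cuda. PRIVATE means it is set while
# compiling torch_cuda itself and is not propagated to targets that link
# against torch_cuda.
if(USE_FLASH_ATTENTION)
  target_compile_definitions(torch_cuda PRIVATE USE_FLASH_ATTENTION)
endif()
```

The PRIVATE visibility is what makes the macro "private" in the commit title: targets that link against `torch_cuda` no longer inherit the definition.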