pytorch
78bdb858 - Call _sdp_attention in nn.functional.mha (#89470)

# Summary

Replaces the inline block of code in `nn.functional.mha` with `_scaled_dot_product_attention`. This function allows the fused kernels to be called if all the required input conditions are met.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89470
Approved by: https://github.com/cpuhrsch, https://github.com/mikekgfb
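
For context, here is a minimal sketch of the attention call this commit routes `nn.functional.mha` through. It uses the public `scaled_dot_product_attention` API (the commit itself called the then-private `_scaled_dot_product_attention`), and the tensor shapes below are illustrative assumptions, not values from the PR:

```python
# Minimal sketch, not the commit's exact code: shows the fused attention entry
# point that multi_head_attention_forward now dispatches to. Requires a
# PyTorch build that exposes F.scaled_dot_product_attention (>= 2.0).
import torch
import torch.nn.functional as F

# Hypothetical shapes for illustration: (batch, num_heads, seq_len, head_dim).
batch, num_heads, seq_len, head_dim = 2, 8, 128, 64
q = torch.randn(batch, num_heads, seq_len, head_dim)
k = torch.randn(batch, num_heads, seq_len, head_dim)
v = torch.randn(batch, num_heads, seq_len, head_dim)

# Mathematically equivalent to
#   softmax(q @ k.transpose(-2, -1) / sqrt(head_dim)) @ v,
# but dispatched to a fused kernel when the inputs satisfy the kernel's
# dtype/device/shape conditions, falling back to the math path otherwise.
out = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=False)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```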