xla
6c3f2313
- Add heuristic default block sizes for different cases in ragged attention kernel (#8922)
Commit
138 days ago
References
#8922 - Add heuristic default block sizes for different cases in ragged attention kernel
Author
yaochengji
Parents
f1fe8bc0
Files (2)
test/test_pallas.py
torch_xla/experimental/custom_kernel.py
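The commit title describes workload-dependent defaults for the ragged attention kernel's block sizes rather than a single fixed constant. A minimal sketch of that idea, assuming a hypothetical helper: the function name, parameter names, and thresholds below are illustrative assumptions, not the code from this commit.

# Minimal sketch of a case-based block-size heuristic for a ragged
# attention kernel. Names and thresholds are illustrative assumptions,
# not the actual torch_xla/experimental/custom_kernel.py implementation.

def _heuristic_default_block_sizes(max_num_tokens: int,
                                   max_kv_len: int) -> tuple[int, int]:
    """Pick (num_queries_per_block, num_kv_pages_per_block) by case.

    Smaller workloads get smaller blocks so short, ragged sequences waste
    less work on padding; larger workloads get bigger blocks to amortize
    grid and memory-traffic overhead.
    """
    if max_kv_len <= 1024:      # short KV caches: keep KV blocks small
        num_kv_pages_per_block = 8
    elif max_kv_len <= 4096:    # medium KV caches
        num_kv_pages_per_block = 16
    else:                       # long KV caches
        num_kv_pages_per_block = 32

    # Fewer queries per block when the token batch is small.
    num_queries_per_block = 16 if max_num_tokens <= 512 else 64
    return num_queries_per_block, num_kv_pages_per_block

# Example: a caller could fall back to the heuristic when the user
# does not pass explicit block sizes.
q_blk, kv_blk = _heuristic_default_block_sizes(max_num_tokens=256,
                                               max_kv_len=2048)

A fixed default tuned for one sequence length tends to either waste compute on short sequences or underutilize hardware on long ones; per-case defaults, as the commit title suggests, avoid committing to either extreme.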