xla
6c3f2313 - Add heuristic default block sizes for different cases in ragged attention kernel (#8922)

Committed 138 days ago
Files changed
  • test/test_pallas.py
  • torch_xla/experimental/custom_kernel.py
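The change lives in the ragged attention wrapper in torch_xla/experimental/custom_kernel.py, with coverage in test/test_pallas.py. As a rough illustration of the idea in the commit title only, a heuristic default of this kind usually derives block sizes from the input shape rather than using one fixed value. The sketch below is an assumption for illustration: the function name, parameter names, and thresholds are hypothetical and are not the values chosen in the commit.

```python
# Hypothetical sketch: picks default block sizes for a ragged attention
# kernel based on problem size. Names and thresholds are illustrative,
# not the heuristic implemented in custom_kernel.py.

def _heuristic_block_sizes(max_num_batched_tokens: int,
                           pages_per_sequence: int) -> tuple[int, int]:
  """Return default (num_kv_pages_per_block, num_queries_per_block).

  Small inputs get small blocks so one block does not dominate the grid;
  larger inputs get bigger blocks to amortize per-block kernel overhead.
  """
  # Illustrative thresholds for the query-block size.
  if max_num_batched_tokens <= 128:
    num_queries_per_block = 16
  elif max_num_batched_tokens <= 512:
    num_queries_per_block = 32
  else:
    num_queries_per_block = 64

  # Never ask for more KV pages per block than a sequence can have.
  num_kv_pages_per_block = min(pages_per_sequence, 128)

  return num_kv_pages_per_block, num_queries_per_block


if __name__ == "__main__":
  # Example: a small batch with short sequences gets small blocks.
  print(_heuristic_block_sizes(max_num_batched_tokens=96, pages_per_sequence=8))
```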