9d4dcae7 - [Pallas] Support Flash Attention (#6658)

Summary:
This PR makes all the changes necessary to support Pallas's FlashAttention. The major change is disabling the TPU layout optimization that places larger dimensions at the most minor layout positions. That optimization boosts performance for workloads whose input tensors have more than 2 dimensions, such as ResNet; for LLMs it should be fine to disable, since all inputs are 2D.

ResNet:
python test/test_train_mp_imagenet.py --fake_data --model=resnet50 --num_epochs=1 --metrics_debug

XLA_TPU_LAYOUT=0 | Training Device=xla:0/0 Epoch=1 Step=2320 Loss=0.00135 Rate=474.16 GlobalRate=255.14 Time=00:06:30
XLA_TPU_LAYOUT=1 | Training Device=xla:0/1 Epoch=1 Step=2340 Loss=0.00135 Rate=1750.53 GlobalRate=1151.09 Time=00:15:46

Llama 2 2B:
XLA_TPU_LAYOUT=0: (screenshot, 2024-03-05 11:37 AM)
XLA_TPU_LAYOUT=1: (screenshot, 2024-03-05 4:48 PM)

Test Plan:
python test/test_operations.py -v -k test_tpu_custom_call_pallas_flash_attention
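For context, a minimal sketch of how the Pallas flash-attention kernel might be invoked from PyTorch/XLA. The `flash_attention` wrapper in `torch_xla.experimental.custom_kernel`, its signature, and the need to set `XLA_TPU_LAYOUT` before the runtime initializes are assumptions for illustration; only the env var name and its 0/1 semantics come from this commit.

```python
import os

# Assumption: XLA_TPU_LAYOUT is read when the TPU runtime initializes,
# so it must be set before torch_xla is imported.
# 0 disables the layout optimization for dim > 2 tensors (per this commit).
os.environ["XLA_TPU_LAYOUT"] = "0"

import torch
import torch_xla.core.xla_model as xm
# Assumption: this wrapper exists at this path; the exact import location
# and signature may differ between torch_xla releases.
from torch_xla.experimental.custom_kernel import flash_attention

device = xm.xla_device()

# LLM-style inputs (batch, num_heads, seq_len, head_dim): each slice the
# kernel sees is 2D, the case where disabling the layout optimization is
# expected to be harmless.
q = torch.randn(1, 8, 1024, 128, dtype=torch.bfloat16, device=device)
k = torch.randn(1, 8, 1024, 128, dtype=torch.bfloat16, device=device)
v = torch.randn(1, 8, 1024, 128, dtype=torch.bfloat16, device=device)

# Lowers to the Pallas kernel via a TPU custom call.
out = flash_attention(q, k, v)
xm.mark_step()
```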