[CUDA] Improve performance of DecoderMaskedMultiheadAttention on A100 (#18695)
### Description
Currently there are two memory-latency-bound hotspots in the
DecoderMaskedMultiheadAttention kernel, both reading from global
memory - one reading K values and the other reading V values.
The current logic to read them both is something like this -

```c++
for (int i = 0; i < all_time_steps; ++i) {
  auto data_in_register = load_chunk_from_global_memory(i);
  do_compute(data_in_register);
}
```
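To make the pattern concrete, here is a minimal standalone CUDA sketch of this serialized load-then-compute loop. The kernel name, the `dot4` helper, and the `[time_step][thread]` `float4` layout are illustrative assumptions, not the actual ORT kernel:

```cuda
#include <cuda_runtime.h>

__device__ __forceinline__ float dot4(float4 a, float4 b) {
  return a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
}

// Launch with, e.g., <<<1, 32>>> where k_cache holds all_time_steps * 32
// float4 chunks. Each thread accumulates Q.K dot products over all time
// steps; every iteration's compute waits on the load it just issued.
__global__ void qk_dot_serialized(const float4* __restrict__ k_cache,
                                  const float4* __restrict__ q,
                                  float* __restrict__ out,
                                  int all_time_steps) {
  const float4 q_reg = q[threadIdx.x];
  float acc = 0.f;
  for (int i = 0; i < all_time_steps; ++i) {
    float4 k = k_cache[i * blockDim.x + threadIdx.x];  // fetch into registers
    acc += dot4(k, q_reg);                             // stalls until k arrives
  }
  out[threadIdx.x] = acc;
}
```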
This incurs a data read stall: the compute instruction must wait for
its data to be fetched into registers before it can begin, and issuing
only one load at a time also does not fully utilize the memory
bandwidth of the A100. The above logic can be re-written with some
manual loop unrolling so that more data reads are triggered "in flight".
Unroll factor: 4

```c++
for (int i = 0; i < all_time_steps; i += 4) {
  auto data_in_register_0 = load_chunk_from_global_memory(i);
  // Do bounds checks for the following loads
  auto data_in_register_1 = load_chunk_from_global_memory(i + 1);
  auto data_in_register_2 = load_chunk_from_global_memory(i + 2);
  auto data_in_register_3 = load_chunk_from_global_memory(i + 3);
  do_compute(data_in_register_0);
  // Do bounds checks for the following computes
  do_compute(data_in_register_1);
  do_compute(data_in_register_2);
  do_compute(data_in_register_3);
}
```
The idea is that the memory read latency is hidden by instructions being
issued for subsequent data reads. See here for more details -
https://forums.developer.nvidia.com/t/global-memory-access-synchronous-or-asynchronous-read-write/3256/4
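Applied to the earlier sketch (reusing the same `dot4` helper and layout assumptions), the unrolled variant issues all four loads before consuming any of them. A remainder loop stands in for the bounds checks called out in the pseudocode - again an illustrative sketch, not the actual ORT kernel:

```cuda
// Unroll factor 4: four independent loads are issued before any compute,
// so later reads are in flight while earlier dot products execute.
__global__ void qk_dot_unrolled(const float4* __restrict__ k_cache,
                                const float4* __restrict__ q,
                                float* __restrict__ out,
                                int all_time_steps) {
  const float4 q_reg = q[threadIdx.x];
  float acc = 0.f;
  int i = 0;
  for (; i + 3 < all_time_steps; i += 4) {
    float4 k0 = k_cache[(i + 0) * blockDim.x + threadIdx.x];
    float4 k1 = k_cache[(i + 1) * blockDim.x + threadIdx.x];
    float4 k2 = k_cache[(i + 2) * blockDim.x + threadIdx.x];
    float4 k3 = k_cache[(i + 3) * blockDim.x + threadIdx.x];
    acc += dot4(k0, q_reg);
    acc += dot4(k1, q_reg);
    acc += dot4(k2, q_reg);
    acc += dot4(k3, q_reg);
  }
  // Tail loop: covers time steps left over when the count is not a
  // multiple of 4 (the bounds checks in the pseudocode above).
  for (; i < all_time_steps; ++i) {
    acc += dot4(k_cache[i * blockDim.x + threadIdx.x], q_reg);
  }
  out[threadIdx.x] = acc;
}
```

A `#pragma unroll 4` hint can sometimes coax the compiler into a similar schedule, but writing the loads out by hand makes the intended "loads first, compute after" ordering explicit rather than left to the optimizer.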
Kernel clock cycles, latency, and memory bandwidth usage before:
<img width="1210" alt="image"
src="https://github.com/microsoft/onnxruntime/assets/9969784/7a1f41f9-fdaa-47b3-b629-996d7b5eef17">
Kernel clock cycles, latency, and memory bandwidth usage after:
<img width="1205" alt="image"
src="https://github.com/microsoft/onnxruntime/assets/9969784/c76b2d2f-43e3-43c9-a710-b5fae76f69b6">
As can be seen, the kernel latency is better by >30% and memory
throughput is better by >14%.
We have a 1P customer using the Whisper model (sampling using
BeamSearch), and the E2E perf gain for a representative production
input is >6.5%.
Whisper E2E Latency for sample input before (on A100):
<img width="194" alt="image"
src="https://github.com/microsoft/onnxruntime/assets/9969784/84ef59f5-84f2-4277-b9f8-b04c27336642">
Whisper E2E Latency for sample input after (on A100):
<img width="191" alt="image"
src="https://github.com/microsoft/onnxruntime/assets/9969784/ca9fe5d3-f726-403e-b27c-be4ee07e0625">
This feature of loading more data in flight may not always yield gains;
the benefit is workload dependent. For now, the feature is kept turned
OFF by default. It can be turned ON by the user when needed.
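For illustration only, an opt-in switch of this kind is often wired up as an environment variable read on the host side. The variable name below is a placeholder, not the actual ONNX Runtime setting:

```c++
#include <cstdlib>
#include <cstring>

// Hypothetical sketch: EXAMPLE_KV_DATA_IN_FLIGHT is a placeholder name,
// NOT the real ORT switch. Defaults to OFF unless explicitly set to "1".
static bool LoadKVDataInFlight() {
  const char* v = std::getenv("EXAMPLE_KV_DATA_IN_FLIGHT");
  return v != nullptr && std::strcmp(v, "1") == 0;
}
```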
### Motivation and Context
Improve BeamSearch performance on CUDA EP