onnxruntime
ef1aaa36 - Adding interface for batched integer gemm (#7249)

Adding interface for batched integer gemm (#7249)

Parallelize MinMax, Quantize, and batched quantized GEMM.

A performance problem was identified in the quantized T5 decoder model, and the DynamicMatMul operator is the culprit. This operator spends its time computing the MinMax of a tensor, quantizing the tensor, and performing a batched QGEMM, all of which can be parallelized. A single GEMM is already parallelized; however, a batched GEMM calls GEMM sequentially multiple times, which repeatedly starts and ends parallel sections and can be slow. So we made the following changes:

- Parallel task partitioning no longer depends on the degree of parallelism, only on the shapes of the matrices.
- Within a single GEMM, the multiplication is partitioned in 2D along panel lines, to reduce repeated packing.
- For batched GEMM, all parallel tasks are executed in a single parallel section, reducing the cost of starting threads and waiting for them to finish (see the sketch after this list).
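A minimal sketch of that scheme, assuming a generic fork/join thread pool. The ThreadPool, QGemmTile, and tile-size names below are invented for illustration and are not the actual onnxruntime/MLAS API: the tile grid is derived purely from the matrix shapes, the batch and tile indices are flattened into one task index, and everything is dispatched through a single parallel-for.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>

// Stand-in thread pool: a real one would hand iterations to worker
// threads inside one fork/join region; here it runs serially.
struct ThreadPool {
  void ParallelFor(std::ptrdiff_t task_count,
                   const std::function<void(std::ptrdiff_t)>& fn) {
    for (std::ptrdiff_t i = 0; i < task_count; ++i) fn(i);
  }
};

// Hypothetical tile sizes aligned to packing-panel boundaries, so a
// packed panel of A or B is reused across a whole tile instead of
// being re-packed per thread.
constexpr std::size_t kTileM = 128;  // rows of C per task
constexpr std::size_t kTileN = 256;  // columns of C per task

// Hypothetical kernel: computes one rows x cols tile of C = A * B for
// GEMM number `batch` in the batch.
void QGemmTile(std::size_t batch, std::size_t row0, std::size_t rows,
               std::size_t col0, std::size_t cols) {
  (void)batch; (void)row0; (void)rows; (void)col0; (void)cols;
}

void BatchedQGemm(ThreadPool* pool, std::size_t batch_count,
                  std::size_t M, std::size_t N) {
  // The 2D partition depends only on the matrix shapes, never on the
  // number of threads available.
  const std::size_t tiles_m = (M + kTileM - 1) / kTileM;
  const std::size_t tiles_n = (N + kTileN - 1) / kTileN;
  const std::ptrdiff_t tasks_per_gemm =
      static_cast<std::ptrdiff_t>(tiles_m * tiles_n);

  // All batches share one flat task list, so the entire batched GEMM
  // runs in a single parallel section instead of one per GEMM call.
  pool->ParallelFor(
      static_cast<std::ptrdiff_t>(batch_count) * tasks_per_gemm,
      [&](std::ptrdiff_t task) {
        const std::size_t b = static_cast<std::size_t>(task / tasks_per_gemm);
        const std::size_t t = static_cast<std::size_t>(task % tasks_per_gemm);
        const std::size_t row0 = (t / tiles_n) * kTileM;
        const std::size_t col0 = (t % tiles_n) * kTileN;
        QGemmTile(b, row0, std::min(kTileM, M - row0),
                  col0, std::min(kTileN, N - col0));
      });
}
```

Because the task count is shape-derived, the same partition is produced no matter how many threads the pool has; the pool only decides how tasks map onto threads.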
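The MinMax step called out at the top of the message parallelizes in the same spirit: scan disjoint chunks of the tensor independently, then combine the per-chunk results. A standalone hypothetical sketch using plain std::thread (not the thread pool onnxruntime actually uses):

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical parallel MinMax: each worker scans one chunk of the
// tensor; partial results are combined on the calling thread.
std::pair<float, float> ParallelMinMax(const float* data, std::size_t n,
                                       std::size_t num_threads) {
  std::vector<std::pair<float, float>> partial(
      num_threads, {std::numeric_limits<float>::max(),
                    std::numeric_limits<float>::lowest()});
  std::vector<std::thread> workers;
  const std::size_t chunk = (n + num_threads - 1) / num_threads;
  for (std::size_t t = 0; t < num_threads; ++t) {
    workers.emplace_back([&, t] {
      const std::size_t begin = t * chunk;
      const std::size_t end = std::min(begin + chunk, n);
      for (std::size_t i = begin; i < end; ++i) {
        partial[t].first = std::min(partial[t].first, data[i]);
        partial[t].second = std::max(partial[t].second, data[i]);
      }
    });
  }
  for (auto& w : workers) w.join();
  // Reduce the per-chunk minima/maxima into the global result.
  std::pair<float, float> result = partial[0];
  for (std::size_t t = 1; t < num_threads; ++t) {
    result.first = std::min(result.first, partial[t].first);
    result.second = std::max(result.second, partial[t].second);
  }
  return result;
}
```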