llvm-project
50653e5a - [tosa] : Enhance tosa.slice folding for dynamic dims. (#184615)

Source IR:

```mlir
func.func @main(%arg0: tensor<?x112x64x112xf32>) -> tensor<?x113x65x112xf32> {
  %0 = tosa.const_shape {values = dense<[0, 0, 1, 1, 1, 1, 0, 0]> : tensor<8xindex>} : () -> !tosa.shape<8>
  %1 = "tosa.const"() <{values = dense<0.000000e+00> : tensor<1xf32>}> : () -> tensor<1xf32>
  %2 = tosa.pad %arg0, %0, %1 : (tensor<?x112x64x112xf32>, !tosa.shape<8>, tensor<1xf32>) -> tensor<?x114x66x112xf32>
  %3 = tosa.const_shape {values = dense<0> : tensor<4xindex>} : () -> !tosa.shape<4>
  %4 = tosa.const_shape {values = dense<[-1, 113, 65, 112]> : tensor<4xindex>} : () -> !tosa.shape<4>
  %5 = tosa.slice %2, %3, %4 : (tensor<?x114x66x112xf32>, !tosa.shape<4>, !tosa.shape<4>) -> tensor<?x113x65x112xf32>
  return %5 : tensor<?x113x65x112xf32>
}
```

when canonicalized with `mlir-opt --canonicalize` produces

```mlir
func.func @main(%arg0: tensor<?x112x64x112xf32>) -> tensor<?x113x65x112xf32> {
  %0 = tosa.const_shape {values = dense<0> : tensor<4xindex>} : () -> !tosa.shape<4>
  %1 = tosa.const_shape {values = dense<[-1, 113, 65, 112]> : tensor<4xindex>} : () -> !tosa.shape<4>
  %2 = "tosa.const"() <{values = dense<0.000000e+00> : tensor<1xf32>}> : () -> tensor<1xf32>
  %3 = tosa.const_shape {values = dense<[0, 0, 1, 0, 1, 0, 0, 0]> : tensor<8xindex>} : () -> !tosa.shape<8>
  %4 = tosa.pad %arg0, %3, %2 : (tensor<?x112x64x112xf32>, !tosa.shape<8>, tensor<1xf32>) -> tensor<?x113x65x112xf32>
  %5 = tosa.slice %4, %0, %1 : (tensor<?x113x65x112xf32>, !tosa.shape<4>, !tosa.shape<4>) -> tensor<?x113x65x112xf32>
  return %5 : tensor<?x113x65x112xf32>
}
```

because of the `PadSliceOptimization` pattern. Note that after the optimization the `tosa.slice` op is essentially a no-op: its start indices are all zero, and its sizes match the input shape exactly (with -1 selecting the full extent of the dynamic dimension). This change enhances the folder to fold away such `tosa.slice` ops.
After this change, canonicalization produces

```mlir
func.func @main(%arg0: tensor<?x112x64x112xf32>) -> tensor<?x113x65x112xf32> {
  %0 = "tosa.const"() <{values = dense<0.000000e+00> : tensor<1xf32>}> : () -> tensor<1xf32>
  %1 = tosa.const_shape {values = dense<[0, 0, 1, 0, 1, 0, 0, 0]> : tensor<8xindex>} : () -> !tosa.shape<8>
  %2 = tosa.pad %arg0, %1, %0 : (tensor<?x112x64x112xf32>, !tosa.shape<8>, tensor<1xf32>) -> tensor<?x113x65x112xf32>
  return %2 : tensor<?x113x65x112xf32>
}
```
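The condition that makes a `tosa.slice` foldable can be sketched as follows. This is a hypothetical Python illustration of the check, not the actual C++ folder code; the function name and the `DYNAMIC` sentinel are illustrative stand-ins for MLIR's dynamic-dimension marker.

```python
# Illustrative sketch of the no-op check an enhanced tosa.slice folder
# can perform. DYNAMIC stands in for MLIR's dynamic dimension ("?").
DYNAMIC = None

def is_noop_slice(input_shape, start, size):
    """A slice is a no-op when every start index is 0 and each requested
    size covers the whole dimension: either it equals the static dim, or
    the dim is dynamic and the size is -1 (meaning "to the end")."""
    if any(s != 0 for s in start):
        return False
    for dim, sz in zip(input_shape, size):
        if dim is DYNAMIC:
            if sz != -1:      # a dynamic dim must be sliced with -1
                return False
        elif sz != dim:       # a static dim must be taken in full
            return False
    return True

# The slice left over by PadSliceOptimization in the example above:
# input tensor<?x113x65x112xf32>, start = [0,0,0,0], size = [-1,113,65,112].
print(is_noop_slice([DYNAMIC, 113, 65, 112], [0, 0, 0, 0], [-1, 113, 65, 112]))  # True
# The original slice out of the ?x114x66x112 pad result is not foldable.
print(is_noop_slice([DYNAMIC, 114, 66, 112], [0, 0, 1, 1], [-1, 113, 65, 112]))  # False
```

With this check, a slice whose start is all zeros and whose sizes reproduce the operand shape (including -1 on dynamic dims) folds to its input, which is exactly what removes the residual `tosa.slice` in the canonicalized IR above.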