pytorch
78f30386 - Implement Swish(SiLU) operator in FP16

Committed 3 years ago
Implement Swish(SiLU) operator in FP16

Summary: Used the Caffe2 Swish implementation to implement the operator. Will need to resolve the error introduced.

```
test_quantized_swish_2D (tests.operators.testQuantizedSilu.TestSiLU) ...
input: (tensor([[-6.0000, -5.9961, -5.9922,  ..., -5.7734, -5.7695, -5.7656],
        [-5.7617, -5.7539, -5.7500,  ..., -5.5352, -5.5312, -5.5234],
        [-5.5195, -5.5156, -5.5117,  ..., -5.2930, -5.2891, -5.2852],
        ...,
        [ 5.2852,  5.2891,  5.2930,  ...,  5.5117,  5.5156,  5.5195],
        [ 5.5234,  5.5312,  5.5352,  ...,  5.7500,  5.7539,  5.7617],
        [ 5.7656,  5.7695,  5.7734,  ...,  5.9922,  5.9961,  6.0000]]),)
base_res: tensor([[-0.0148, -0.0149, -0.0149,  ..., -0.0179, -0.0180, -0.0180],
        [-0.0181, -0.0182, -0.0182,  ..., -0.0218, -0.0218, -0.0220],
        [-0.0220, -0.0221, -0.0222,  ..., -0.0265, -0.0266, -0.0266],
        ...,
        [ 5.2585,  5.2625,  5.2665,  ...,  5.4895,  5.4935,  5.4975],
        [ 5.5015,  5.5094,  5.5134,  ...,  5.7318,  5.7357,  5.7437],
        [ 5.7476,  5.7516,  5.7555,  ...,  5.9773,  5.9812,  5.9852]])
tnco_res: tensor([[-0.0148, -0.0149, -0.0149,  ..., -0.0179, -0.0180, -0.0180],
        [-0.0181, -0.0182, -0.0182,  ..., -0.0218, -0.0218, -0.0220],
        [-0.0220, -0.0221, -0.0222,  ..., -0.0265, -0.0265, -0.0266],
        ...,
        [ 5.2578,  5.2617,  5.2656,  ...,  5.4922,  5.4922,  5.4961],
        [ 5.5000,  5.5078,  5.5156,  ...,  5.7305,  5.7383,  5.7422],
        [ 5.7461,  5.7500,  5.7539,  ...,  5.9766,  5.9805,  5.9844]])
nnpi_res: tensor([[-0.0148, -0.0149, -0.0149,  ..., -0.0179, -0.0180, -0.0180],
        [-0.0181, -0.0182, -0.0182,  ..., -0.0218, -0.0218, -0.0220],
        [-0.0220, -0.0221, -0.0222,  ..., -0.0265, -0.0266, -0.0266],
        ...,
        [ 5.2585,  5.2625,  5.2665,  ...,  5.4895,  5.4935,  5.4975],
        [ 5.5015,  5.5094,  5.5134,  ...,  5.7318,  5.7357,  5.7437],
        [ 5.7476,  5.7516,  5.7555,  ...,  5.9773,  5.9812,  5.9852]])
diff: tensor([[4.1956e-06, 9.8441e-07, 6.0154e-06,  ..., 4.2785e-06, 7.6480e-06, 1.0842e-05],
        [1.3988e-06, 4.1034e-06, 6.5863e-06,  ..., 5.3961e-06, 2.9635e-06, 1.0209e-05],
        [1.2219e-06, 7.9758e-06, 1.7386e-05,  ..., 3.0547e-07, 2.2141e-05, 1.4316e-05],
        ...,
        [7.0286e-04, 7.8678e-04, 8.7023e-04,  ..., 2.6422e-03, 1.3347e-03, 1.4052e-03],
        [1.4753e-03, 1.6141e-03, 2.2225e-03,  ..., 1.2884e-03, 2.5592e-03, 1.4634e-03],
        [1.5216e-03, 1.5793e-03, 1.6365e-03,  ..., 6.9284e-04, 7.4100e-04, 7.8964e-04]])

nnpi traced graph:
graph(%self : __torch__.tests.operators.testQuantizedSilu.SiLUModel,
      %x : Float(*, *, requires_grad=0, device=cpu)):
  %3 : None = prim::Constant()
  %4 : bool = prim::Constant[value=0]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %5 : Device = prim::Constant[value="cpu"]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %6 : int = prim::Constant[value=0]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %7 : int = prim::Constant[value=6]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %8 : Float(*, *, requires_grad=0, device=cpu) = aten::zeros_like(%x, %7, %6, %5, %4, %3) # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %input : Float(*, *, requires_grad=0, device=cpu) = glow::FusionGroup_0(%x, %8)
  %10 : Tensor = aten::silu(%input) # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/torch/nn/functional.py:1804:0
  return (%10)
with glow::FusionGroup_0 = graph(%0 : Float(*, *, requires_grad=0, device=cpu),
      %1 : Float(*, *, requires_grad=0, device=cpu)):
  %2 : int = prim::Constant[value=1]()
  %input : Float(*, *, requires_grad=0, device=cpu) = aten::add(%0, %1, %2) # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %4 : int = prim::Constant[value=1]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  return (%input)

tnco traced graph:
graph(%self : __torch__.tests.operators.testQuantizedSilu.___torch_mangle_0.SiLUModel,
      %x : Float(*, *, requires_grad=0, device=cpu)):
  %2 : int = prim::Constant[value=1]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %3 : None = prim::Constant()
  %4 : bool = prim::Constant[value=0]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %5 : Device = prim::Constant[value="cpu"]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %6 : int = prim::Constant[value=0]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %7 : int = prim::Constant[value=6]() # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %8 : Float(*, *, requires_grad=0, device=cpu) = aten::zeros_like(%x, %7, %6, %5, %4, %3) # /data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py:13:0
  %12 : Tensor = fakeNNPI::addFP16(%x, %8, %2)
  %11 : Tensor = fakeNNPI::siluFP16(%12)
  return (%11)
FAIL

======================================================================
FAIL: test_quantized_swish_2D (tests.operators.testQuantizedSilu.TestSiLU)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/operators/testQuantizedSilu.py", line 26, in test_quantized_swish_2D
    validate_nnpi_model(model, (x,), expected_ops, [])
  File "/data/users/kaus/fbsource/fbcode/buck-out/dev/gen/glow/fb/torch_glow/custom_nnpi_ops/testQuantizedSilu#binary,link-tree/tests/utils.py", line 73, in validate_nnpi_model
    assert is_equal
AssertionError
```

Test Plan: Run the test with `buck test mode/dev //glow/fb/torch_glow/custom_nnpi_ops:testQuantizedSilu`

Reviewed By: hyuen

Differential Revision: D25981369

fbshipit-source-id: dd0f3686b3cbf6fc575c959c7661125ecbf0b0db