deb69898 - [fx-acc] add optimize_quantization to FX graph opts (#65929)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65929

This adds a set of quantize/dequantize graph optimizations. The rewrite patterns exercised by the tests include:

- Dequantize(Quantize(X)) -> X
- Quantize(Dequantize(X)) -> RescaleQuantized(X)
- QuantizePerChannel(Dequantize(X)) -> RescaleQuantized(X)
- Dequantize(Quantize(Dequantize(X))) -> Dequantize(rescale(X)) -> Dequantize(X)
- Rescale(QuantizeNode) -> QuantizeNode

Test Plan:
```
buck test mode/opt glow/fb/fx/graph_opts:test_fx_graph_opts
```
```
Parsing buck files: finished in 0.8 sec
Building: finished in 3.0 sec (100%) 8475/80926 jobs, 0/80926 updated
Total time: 3.9 sec
More details at https://www.internalfb.com/intern/buck/build/9dd6193b-d99c-4d2a-8ef8-4d71380916e7
BUILD SUCCEEDED
Tpx test run coordinator for Facebook. See https://fburl.com/tpx for details.
Running with tpx session id: b5a83d2a-8870-400e-b21e-3286967d1f4a
Trace available for this run at /tmp/tpx-20211018-165956.836274/trace.log
Started reporting to test run: https://www.internalfb.com/intern/testinfra/testrun/4222124724048882
✓ ListingSuccess: glow/fb/fx/graph_opts:test_fx_graph_opts - main (3.152)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_transpose_to_reshape_1_optimizable (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestTransposeToReshape) (0.100)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_transpose_to_reshape_0_identity (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestTransposeToReshape) (0.017)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_0 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.154)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_1 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.140)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantization_2_QuantizePerChannel_Dequantize_X_RescaleQuantized_X_ (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantization) (0.422)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_3 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.296)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_dequantize_clamp_remove_one_3 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.288)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_dequantize_clamp_remove_one_1 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.433)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_clamp_tensor (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.346)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantization_1_Quantize_Dequantize_X_RescaleQuantized_X_ (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantization) (0.403)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_transpose_to_reshape_2_unoptimizable (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestTransposeToReshape) (0.117)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_remove_one_1 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.415)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_remove_one_3 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.280)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantization_3_Dequantize_Quantize_Dequantize_X_Dequantize_rescale_X_Dequantize_X_ (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantization) (0.150)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_6 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.133)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_dequantize_clamp_remove_one_2 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.523)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_dequantize_clamp_remove_one_0 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.569)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantization_4_Rescale_QuantizeNode_QuantizeNode_ (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantization) (0.815)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_5 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.295)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_4 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.308)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_2 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.213)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_remove_one_2 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.230)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantization_0_Dequantize_Quantize_X_X (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantization) (0.336)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_remove_one_0 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.486)
✓ Pass: glow/fb/fx/graph_opts:test_fx_graph_opts - test_optimize_quantize_clamp_ignore_one_7 (glow.fb.fx.graph_opts.tests.test_fx_graph_opts.TestOptimizeQuantizeClamp) (0.306)
Summary
  Pass: 25
  ListingSuccess: 1
If you need help understanding your runs, please follow the wiki: https://fburl.com/posting_in_tpx_users
Finished test run: https://www.internalfb.com/intern/testinfra/testrun/4222124724048882
```
# Before
```
Model before opt.
graph():
    %x : [#users=1] = placeholder[target=x]
    %quantize_per_tensor_2 : [#users=1] = call_function[target=torch.fx.experimental.fx_acc.acc_ops.quantize_per_tensor](args = (), kwargs = {input: %x, acc_out_ty: ((8, 4, 2), torch.qint32, False, (8, 2, 1), torch.contiguous_format, True, {scale: 1.000001e-05, zero_point: 0, qscheme: torch.per_tensor_affine})})
    %dequantize_1 : [#users=1] = call_function[target=torch.fx.experimental.fx_acc.acc_ops.dequantize](args = (), kwargs = {input: %quantize_per_tensor_2})
    %quantize_per_tensor_3 : [#users=1] = call_function[target=torch.fx.experimental.fx_acc.acc_ops.quantize_per_tensor](args = (), kwargs = {input: %dequantize_1, acc_out_ty: ((8, 4, 2), torch.qint32, False, (8, 2, 1), torch.contiguous_format, True, {scale: 1e-05, zero_point: 0, qscheme: torch.per_tensor_affine})})
    return quantize_per_tensor_3

opcode         name                   target                                            args                      kwargs
-------------  ---------------------  ------------------------------------------------  ------------------------  ------
placeholder    x                      x                                                 ()                        {}
call_function  quantize_per_tensor_2  <function quantize_per_tensor at 0x7f66030a34c0>  ()                        {'input': x, 'acc_out_ty': ((8, 4, 2), torch.qint32, False, (8, 2, 1), torch.contiguous_format, True, {'scale': 1.000001e-05, 'zero_point': 0, 'qscheme': torch.per_tensor_affine})}
call_function  dequantize_1           <function dequantize at 0x7f66030a35e0>           ()                        {'input': quantize_per_tensor_2}
call_function  quantize_per_tensor_3  <function quantize_per_tensor at 0x7f66030a34c0>  ()                        {'input': dequantize_1, 'acc_out_ty': ((8, 4, 2), torch.qint32, False, (8, 2, 1), torch.contiguous_format, True, {'scale': 1e-05, 'zero_point': 0, 'qscheme': torch.per_tensor_affine})}
output         output                 output                                            (quantize_per_tensor_3,)  {}
```
# After
```
Model after opt.
graph():
    %x : [#users=1] = placeholder[target=x]
    %quantize_per_tensor_2 : [#users=1] = call_function[target=torch.fx.experimental.fx_acc.acc_ops.quantize_per_tensor](args = (), kwargs = {input: %x, acc_out_ty: ((8, 4, 2), torch.qint32, False, (8, 2, 1), torch.contiguous_format, True, {scale: 1e-05, zero_point: 0, qscheme: torch.per_tensor_affine})})
    return quantize_per_tensor_2

opcode         name                   target                                            args                      kwargs
-------------  ---------------------  ------------------------------------------------  ------------------------  ------
placeholder    x                      x                                                 ()                        {}
call_function  quantize_per_tensor_2  <function quantize_per_tensor at 0x7f66030a34c0>  ()                        {'input': x, 'acc_out_ty': ((8, 4, 2), torch.qint32, False, (8, 2, 1), torch.contiguous_format, True, {'scale': 1e-05, 'zero_point': 0, 'qscheme': torch.per_tensor_affine})}
output         output                 output                                            (quantize_per_tensor_2,)  {}
```

Reviewed By: jfix71

Differential Revision: D30945732

fbshipit-source-id: 427cd4215b546e1d6c5362734bb7de93d0c0b1b9
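The core of the transformation above is a peephole rewrite over the graph. As a minimal sketch (not the actual fx_acc implementation, which walks `torch.fx` nodes and handles scale/zero-point rescaling): the `Node` dataclass and `eliminate_quant_pairs` helper below are hypothetical names used only to illustrate the Dequantize(Quantize(X)) -> X collapse on a toy single-input graph.

```python
# Hedged sketch: bottom-up elimination of dequantize(quantize(x)) pairs on a
# toy node chain. The real pass operates on torch.fx GraphModules via acc_ops
# and merges quantization parameters; this toy ignores scales entirely.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: str                       # "input", "quantize", or "dequantize"
    arg: Optional["Node"] = None  # single data input, for simplicity

def eliminate_quant_pairs(node: Node) -> Node:
    """Rewrite dequantize(quantize(x)) to x, applied bottom-up."""
    if node.arg is not None:
        node.arg = eliminate_quant_pairs(node.arg)
    if node.op == "dequantize" and node.arg is not None and node.arg.op == "quantize":
        return node.arg.arg  # drop both nodes; x flows through unchanged
    return node

x = Node("input")
# Dequantize(Quantize(X)) -> X
assert eliminate_quant_pairs(Node("dequantize", Node("quantize", x))) is x
# Mirrors the Before/After dump: quantize(dequantize(quantize(x))) -> quantize(x)
g = eliminate_quant_pairs(Node("quantize", Node("dequantize", Node("quantize", x))))
assert g.op == "quantize" and g.arg is x
```

Note the Before/After dump shows one more subtlety the toy skips: the surviving quantize node keeps the outer node's parameters (scale 1e-05, not the inner 1.000001e-05), which is why the real pass needs the RescaleQuantized forms listed in the summary.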