pytorch
983d4f6f - [Vulkan] Enable QInt8 weights and test quantized convolution with QInt8 weights and QInt32 bias (#90441)

Summary:
- Enable convolution with QInt8 weights.
- Modify the test_quantized_conv2d function to allow testing with QInt8 weights and QInt32 bias.
- Add multiple tests for regular, depthwise, and pointwise convolution with QInt8 weights and QInt32 bias.

Test Plan:

On Mac
```
cd ~/fbsource
buck1 run -c pt.vulkan_full_precision=1 //xplat/caffe2:pt_vulkan_quantized_api_test_binAppleMac\#macosx-arm64
```

On Android
```
cd ~/fbsource
buck1 build -c ndk.custom_libcxx=false -c pt.enable_qpl=0 -c pt.vulkan_full_precision=1 //xplat/caffe2:pt_vulkan_quantized_api_test_binAndroid\#android-arm64 --show-output
adb push buck-out/gen/xplat/caffe2/pt_vulkan_quantized_api_test_binAndroid\#android-arm64 /data/local/tmp/vulkan_quantized_api_test
adb shell "/data/local/tmp/vulkan_quantized_api_test"
```

Reviewed By: kimishpatel

Differential Revision: D41562053

Pull Request resolved: https://github.com/pytorch/pytorch/pull/90441
Approved by: https://github.com/kimishpatel
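The commit itself changes the Vulkan backend and its C++ API tests, but the dtype combination being exercised (quint8 activations with qint8 weights) can be illustrated with the standard quantized conv2d ops on the default CPU backend. The sketch below is a rough illustration under assumptions, not the commit's test code: the tensor shapes, scales, and zero points are made-up values, and the bias is passed here as float32 (the CPU prepack op takes a float bias), whereas the Vulkan tests also cover an explicitly quantized QInt32 bias.

```python
# Hypothetical sketch: quantized conv2d with qint8 weights on the CPU
# quantized backend, mirroring the quint8-activation / qint8-weight
# combination tested for Vulkan in this commit. All numeric values are
# illustrative assumptions.
import torch

# Float reference tensors: NCHW input, OIHW weight, per-output-channel bias.
x = torch.rand(1, 3, 8, 8)
w = torch.rand(4, 3, 3, 3)
b = torch.rand(4)

# Quantize activations to quint8 and weights to qint8.
qx = torch.quantize_per_tensor(x, scale=0.05, zero_point=128, dtype=torch.quint8)
qw = torch.quantize_per_tensor(w, scale=0.02, zero_point=0, dtype=torch.qint8)

# Prepack the qint8 weight with a float bias:
# (weight, bias, stride, padding, dilation, groups).
packed = torch.ops.quantized.conv2d_prepack(qw, b, [1, 1], [1, 1], [1, 1], 1)

# Run the quantized convolution, producing a quint8 output with the
# requested output scale and zero point.
qy = torch.ops.quantized.conv2d(qx, packed, 0.1, 128)

print(qy.dtype, qy.shape)      # torch.quint8, (1, 4, 8, 8)
print(qy.dequantize()[0, 0])   # float view of the first output channel
```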