4061239f - [qnnpack] Remove redundant fp16 dependency (#67281)

[qnnpack] Remove redundant fp16 dependency (#67281)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67281

`qnnpack/operator.h` introduces a dependency on the external library fp16 via `qnnpack/requantization.h`. Including `qnnpack/operator.h` in `pytorch_qnnpack.h` makes objects that do not actually require fp16 depend on it indirectly, because they include `pytorch_qnnpack.h`. This was causing some test and bench targets to fail to build with cmake for local and android/arm64 (the only two configurations tried).

This diff moves the `qnnpack/operator.h` include from `pytorch_qnnpack.h` to `qnnpack_func.h`, and explicitly adds `qnnpack/operator.h` in `src/conv-prepack.cc`.

Test Plan: Ran all the tests for local on my devserver, and for arm64 on a Pixel 3a.

Reviewed By: kimishpatel

Differential Revision: D31861962

fbshipit-source-id: e1425c7dc3e6700cbe3e46b64898187792555bb7
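The effect of the change can be sketched by modeling the include graph before and after the move. This is a hypothetical illustration, not the real QNNPACK build graph: the edges below only encode the headers named in the commit message, and `transitive_deps` is a helper written for this sketch.

```python
def transitive_deps(graph, start):
    """Return the set of nodes reachable from `start` via include edges."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

# Before the fix: pytorch_qnnpack.h includes qnnpack/operator.h, which
# reaches fp16 via qnnpack/requantization.h, so every consumer of the
# public header picks up fp16 transitively.
before = {
    "pytorch_qnnpack.h": ["qnnpack/operator.h"],
    "qnnpack/operator.h": ["qnnpack/requantization.h"],
    "qnnpack/requantization.h": ["fp16"],
}

# After the fix: the operator.h include lives in qnnpack_func.h instead,
# so consumers of pytorch_qnnpack.h alone no longer depend on fp16.
after = {
    "pytorch_qnnpack.h": [],
    "qnnpack_func.h": ["qnnpack/operator.h"],
    "qnnpack/operator.h": ["qnnpack/requantization.h"],
    "qnnpack/requantization.h": ["fp16"],
}

print("fp16" in transitive_deps(before, "pytorch_qnnpack.h"))  # True
print("fp16" in transitive_deps(after, "pytorch_qnnpack.h"))   # False
print("fp16" in transitive_deps(after, "qnnpack_func.h"))      # True
```

Targets that still need the operator machinery (like `src/conv-prepack.cc`) include `qnnpack/operator.h` explicitly, keeping the fp16 dependency visible only where it is actually used.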