[qnnpack] Remove redundant fp16 dependency (#68011)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/68011
`qnnpack/operator.h` introduces a dependency on an external library fp16 via `qnnpack/requantization.h`.
Including `qnnpack/operator.h` in `pytorch_qnnpack.h` makes every object that includes `pytorch_qnnpack.h` depend on fp16 indirectly, even when it does not actually need fp16.
This was causing some test and benchmark targets to fail to build with CMake for local and android/arm64 (the only two configurations tried).
This diff moves the `qnnpack/operator.h` include from `pytorch_qnnpack.h` to `qnnpack_func.h`, and explicitly adds `qnnpack/operator.h` in `src/conv-prepack.cc`.
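A sketch of the include layout before and after the change (file contents are illustrative, not the actual headers; only the file names come from this diff):

```cpp
// --- Before ---
// pytorch_qnnpack.h
#include <qnnpack/operator.h>   // pulls in qnnpack/requantization.h -> fp16
// => every consumer of pytorch_qnnpack.h transitively depends on fp16

// --- After ---
// pytorch_qnnpack.h
// (no operator.h include; no transitive fp16 dependency)

// qnnpack_func.h
#include <qnnpack/operator.h>   // only consumers of qnnpack_func.h see fp16

// src/conv-prepack.cc
#include <qnnpack/operator.h>   // explicit include where the type is used
```

The effect is that targets which only need `pytorch_qnnpack.h` no longer have to link or build against fp16.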
Test Plan: Ran all the tests locally on my devserver, and for arm64 on a Pixel 3a.
Reviewed By: salilsdesai
Differential Revision: D32250984
fbshipit-source-id: 21468d8ef79c90e9876dc00da95383180a1031b5