whitelist prim::AutogradAnyNonZero (#28852)
Summary:
prim::AutogradAnyNonZero is optimized away under normal circumstances (the graph executor specializes tensor arguments and runs `specializeAutogradZero`), so this change should remain backward compatible as long as we are running the original executor.
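For context, a minimal sketch of the semantics behind `prim::AutogradAnyNonZero` (this is illustrative only, not the JIT implementation; undefined/AutogradZero gradients are modeled here as `None`):

```python
# Conceptual sketch only -- not the actual PyTorch JIT op.
# prim::AutogradAnyNonZero reports whether any of its gradient inputs
# is a defined (non-AutogradZero) tensor, so downstream autodiff code
# can skip work when every incoming gradient is zero.
def autograd_any_nonzero(*grads):
    """Return True if any gradient input is defined (non-zero)."""
    return any(g is not None for g in grads)
```

When `specializeAutogradZero` has already proven each input defined or undefined, the check folds to a constant, which is why the op normally disappears from the optimized graph.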
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28852
Differential Revision: D18213118
Pulled By: Krovatkin
fbshipit-source-id: 223f172c59e5f2b05460db7de98edbadc45dd73d