[JIT] Disable conv-add-relu fusion for cuDNN7 when model uses fp16 (#56579)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56579
On cuDNN versions older than v8, conv-add-relu fusion regresses
performance for fp16 models. Disable the fusion for fp16 when the
cuDNN version is older than v8.
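The guard this change describes can be sketched as follows. This is a minimal illustration, not the actual PyTorch implementation; the helper name `should_fuse_conv_add_relu` and its signature are hypothetical. The version encoding matches cuDNN's convention (`major*1000 + minor*100 + patch`, so 7.6.5 reports as 7605):

```python
# Hypothetical sketch of the fusion guard; not PyTorch's real API.
# cuDNN encodes its version as major*1000 + minor*100 + patch.
CUDNN_V8 = 8000

def should_fuse_conv_add_relu(dtype: str, cudnn_version: int) -> bool:
    """Allow conv-add-relu fusion unless running fp16 on cuDNN < 8."""
    if dtype == "fp16" and cudnn_version < CUDNN_V8:
        # Fusion regresses fp16 performance on older cuDNN versions.
        return False
    return True
```

With this guard, fp32 models keep the fusion on any cuDNN version; only the fp16 path on pre-v8 cuDNN falls back to the unfused kernels.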
Test Plan: Tested fp16 models on an NVIDIA Tesla T4
Reviewed By: ZolotukhinM
Differential Revision: D27915514
Pulled By: desertfire
fbshipit-source-id: 1c0081a80540c507e608216c90bc74c486c7008d