Enable fp16 half on timm models (#802)
Summary:
This PR enables fp16 (half precision) by default on `timm` models.
With this change, all torchvision, huggingface, and timm models support fp16 for CUDA inference by default.
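
For reference, a minimal sketch of the fp16 CUDA inference path this enables (the model name and input shape below are illustrative, not taken from the benchmark harness):

```python
import torch
import timm

# Build a timm model, cast its weights to fp16, and move it to the GPU for eval.
model = timm.create_model("resnet50", pretrained=False)
model = model.half().cuda().eval()

# Inputs must match the model dtype (fp16) and device (CUDA).
x = torch.randn(1, 3, 224, 224, dtype=torch.float16, device="cuda")
with torch.no_grad():
    out = model(x)
print(out.dtype)  # torch.float16
```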
Pull Request resolved: https://github.com/pytorch/benchmark/pull/802
Reviewed By: dzhulgakov
Differential Revision: D34912949
Pulled By: xuzhao9
fbshipit-source-id: 74f31db8cfb3df430380e77b9ff28eae75d1a89a