Remove autograd copy_ specific isFloatingPoint (#28279)
Summary:
Remove the autograd copy_-specific isFloatingPoint and use
c10's isFloatingType (and isComplexType) instead.
Before this change, .to or .copy_ would drop requires_grad for
bfloat16 tensors, because only double, float, and half were
considered floating types.
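As a quick illustration of the user-visible effect (a minimal sketch,
assuming a build with bfloat16 support; not part of this change):

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x.to(torch.bfloat16)
    # Previously y.requires_grad was False because bfloat16 was not
    # recognized as a floating type; with this change it stays True.
    print(y.requires_grad)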
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28279
Differential Revision: D18176084
Pulled By: izdeby
fbshipit-source-id: 8a005a6105e4a827be5c8163135e693a7daae4f4