MAINT: np.unique works with f16 directly (#108228)
(follow-up to gh-107768)
Remove an f16->f32 upcast workaround from np.unique, since both torch.unique and NumPy's np.unique appear to handle float16 tensors directly.
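For reference, a minimal check (not part of this PR's diff, just a sketch based on its claim) that unique handles float16 without an upcast:

```python
import torch
import numpy as np

# torch.unique on a float16 tensor; per this PR, no f16->f32 cast is needed.
t = torch.tensor([1.0, 2.0, 1.0, 3.0], dtype=torch.float16)
print(torch.unique(t))   # tensor([1., 2., 3.], dtype=torch.float16)

# NumPy's np.unique likewise works on float16 arrays as-is.
a = np.array([1.0, 2.0, 1.0, 3.0], dtype=np.float16)
print(np.unique(a))      # array([1., 2., 3.], dtype=float16)
```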
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108228
Approved by: https://github.com/lezcano