pytorch
e698a634 - Enabled amin & amax for float16 & bfloat16 (#52579)

Enabled amin & amax for float16 & bfloat16 (#52579)

Summary:
1. Enabled `amax` & `amin` for `float16` & `bfloat16` dtypes for both CPU & CUDA.
2. Added `OpInfo`s for `amax` & `amin`.
3. Enabled `test_min_with_inf` & `test_max_with_inf` for both `float16` & `bfloat16`, as they also use `torch.amin` & `torch.amax` respectively.
4. Enabled `test_amax` & `test_amin` for `float16` but not for `bfloat16`, as comparison is done with `numpy`, which doesn't support `bfloat16`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/52579
Reviewed By: pbelevich
Differential Revision: D26784194
Pulled By: heitorschueroff
fbshipit-source-id: 1050de3e155b83f282fb30b0db6658eead89936c
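A minimal sketch of what this commit enables: calling `torch.amax`/`torch.amin` on a `float16` tensor (the tensor values here are illustrative, and this runs the CPU path; the commit covers CUDA as well).

```python
import torch

# Half-precision tensor; before this commit, amax/amin raised for float16.
t = torch.tensor([[1.0, 4.0], [3.0, 2.0]], dtype=torch.float16)

# Reduce over dim 0: per-column maxima, result stays float16.
col_max = torch.amax(t, dim=0)  # tensor([3., 4.], dtype=torch.float16)

# Reduce over dim 1: per-row minima, result stays float16.
row_min = torch.amin(t, dim=1)  # tensor([1., 2.], dtype=torch.float16)

print(col_max, row_min)
```

The same calls work with `dtype=torch.bfloat16`; per item 4 of the summary, only the NumPy-comparison tests are skipped for `bfloat16`, since NumPy has no native `bfloat16` dtype.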