3adc8f8c - Enable min & max for Float16 & BFloat16 (#51244)

Summary:
Fixes https://github.com/pytorch/pytorch/issues/50790.

Added `min()` & `max()` support for `Float16` & `BFloat16`. CUDA already supported these ops on `Float16`, so the other three combinations (CPU `Float16`, CPU `BFloat16`, and CUDA `BFloat16`) had to be enabled.

`OpInfo`s for `min` & `max` were also added, and their sample inputs were removed from `method_tests()`.

### MORE INFO

The (slightly) longer-term goal is to add dispatch for `min()`- & `max()`-related operations on CPU & CUDA for `Float16` & `BFloat16` wherever it isn't already present:

1. `amin()`
2. `argmax()`
3. `amax()`
4. `argmin()`
5. `torch._aminmax()`
6. `torch.clamp()` on CPU (already supported on CUDA)
7. `min()` (in this PR)
8. `max()` (in this PR)
9. `minimum()`
10. `maximum()`

I'll submit separate PRs for the other ops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/51244
Reviewed By: jbschlosser
Differential Revision: D26503455
Pulled By: anjali411
fbshipit-source-id: c32247f214e9272ca2e4322a23337874e737b140
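
As a rough illustration (not taken from the PR or its tests, and the values are made up), the snippet below shows the kind of call this change enables: `min()`/`max()` reductions on CPU `Float16` and `BFloat16` tensors, matching the `Float16` support CUDA already had.

```python
import torch

# Minimal sketch: after this change, min()/max() reductions work for
# Float16 and BFloat16 tensors on CPU as well as CUDA.
x = torch.tensor([1.0, -2.5, 3.25], dtype=torch.float16)   # CPU Float16
y = torch.tensor([1.0, -2.5, 3.25], dtype=torch.bfloat16)  # CPU BFloat16

print(torch.max(x))  # tensor(3.2500, dtype=torch.float16)
print(torch.min(y))  # tensor(-2.5000, dtype=torch.bfloat16)

# The dim-wise overloads return (values, indices) as usual.
vals, idxs = torch.max(x.reshape(1, 3), dim=1)
```

The elementwise `minimum()`/`maximum()` and the other ops in the list above are left to the follow-up PRs mentioned in the summary.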