pytorch
da0820e5 - add BFloat16 operators on CPU: range, sinh, cosh, frexp, nan_to_num (#61826)

Summary: Added BFloat16 support for `range`, `sinh`, `cosh`, `frexp`, and `nan_to_num` on CPU, and collected benchmark data for these ops (`range`, `sinh`, `cosh`, `frexp`, and `nan_to_num`) for the BFloat16 and Float32 data types using PyTorch's `operator_benchmark` tool.

Benchmark platform: Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz; number of cores: 1 core and 28 cores (1 socket).

Benchmark data:
- [cosh_sinh_benchmark.txt](https://github.com/pytorch/pytorch/files/6974313/cosh_sinh_benchmark.txt)
- [frexp_benchmark.txt](https://github.com/pytorch/pytorch/files/6974315/frexp_benchmark.txt)
- [nan_to_num_benchmark.txt](https://github.com/pytorch/pytorch/files/6974317/nan_to_num_benchmark.txt)
- [range_benchmark.txt](https://github.com/pytorch/pytorch/files/6974318/range_benchmark.txt)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/61826

Reviewed By: saketh-are

Differential Revision: D30257259

Pulled By: VitalyFedyunin

fbshipit-source-id: 394cd713e6394050a8c90b2160633beb675d71dd
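A minimal sketch of what this commit enables, assuming a PyTorch build that includes it: the listed ops now accept `torch.bfloat16` tensors on CPU instead of raising a dtype error.

```python
import torch

# A bfloat16 CPU tensor containing finite, NaN, and infinite values.
x = torch.tensor([0.5, float("nan"), float("inf"), -2.0], dtype=torch.bfloat16)

# nan_to_num: NaN -> 0.0, +inf -> bfloat16 max, now supported for bfloat16.
cleaned = torch.nan_to_num(x)

# sinh / cosh on bfloat16 inputs.
s = torch.sinh(x[0])
c = torch.cosh(x[0])

# frexp decomposes a value into mantissa and exponent: 8.0 == 0.5 * 2**4.
mantissa, exponent = torch.frexp(torch.tensor([8.0], dtype=torch.bfloat16))

# arange (the non-deprecated spelling of range) with a bfloat16 dtype.
r = torch.arange(0, 4, dtype=torch.bfloat16)
```

Note that `torch.range` itself is deprecated in favor of `torch.arange`; the commit covers the legacy op for completeness.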