30f20268 - [inductor] Promote half-precision CPU constants to float (#91224)

Currently `aten.where` can fail with the following C++ compiler error:

```
error: operands to '?:' have different types 'c10::Half' and 'float'
```

This happens because `ops.load` is overridden to cast Half inputs to float, but `ops.constant` will load a Half without promoting to float.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91224
Approved by: https://github.com/jgong5, https://github.com/EikanWang, https://github.com/ngimel
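As a rough illustration, here is a hedged repro sketch of the kind of CPU graph that could trip this error before the fix. The function name, tensor shape, and the `0.5` fill value are illustrative assumptions, not taken from the PR; the point is only that `torch.where` on a half-precision CPU tensor mixes an `ops.constant` value with an `ops.load` value in the generated `?:` expression.

```python
import torch

# Hypothetical repro sketch (names, shape, and fill value are assumptions).
# torch.where lowers to aten.where; on CPU, inductor emits a C++ '?:'
# expression. Per the commit message, ops.load promotes Half inputs to
# float while ops.constant used to emit a Half constant, so the two
# branches of the ternary could end up with different types.
def where_with_half_constant(x):
    return torch.where(x > 0, x, 0.5)

compiled_fn = torch.compile(where_with_half_constant)
x = torch.randn(16, dtype=torch.half)  # half-precision CPU tensor
print(compiled_fn(x))
```

With the change in this commit, half-precision constants on CPU are promoted to float, matching the behavior of `ops.load`, so both branches of the generated ternary have the same type.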