pytorch
9a3e411a - More rigorous mixed overloads on SymInt (#100008)

More rigorous mixed overloads on SymInt (#100008)

Previously, the change to aten/src/ATen/native/LossNLL.cpp eventually resulted in a double / SymInt division, which ended up calling the int64_t / SymInt overload, truncating the double (bad!). By adding overloads for all the int/float types, we prevent this situation from recurring.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100008
Approved by: https://github.com/albanD
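For illustration, here is a minimal, self-contained C++ sketch of the failure mode. It uses a hypothetical `SymIntLike` struct, not PyTorch's actual `c10::SymInt`, to show how a missing floating-point overload lets a `double / SymInt` expression silently fall back to an `int64_t` overload (truncating the left operand), and how adding a `double` overload avoids that.

```cpp
// Simplified sketch (assumption: SymIntLike is a stand-in, not c10::SymInt).
#include <cstdint>
#include <iostream>

struct SymIntLike {
  int64_t value;
};

// If this were the only overload, a double on the left-hand side would be
// implicitly converted to int64_t, truncating the fractional part before
// the division ever happens.
double operator/(int64_t lhs, SymIntLike rhs) {
  return static_cast<double>(lhs) / static_cast<double>(rhs.value);
}

// Adding an explicit double overload keeps the full floating-point value;
// overload resolution now prefers it for double operands.
double operator/(double lhs, SymIntLike rhs) {
  return lhs / static_cast<double>(rhs.value);
}

int main() {
  SymIntLike n{4};
  double weight_sum = 7.5;
  // With only the int64_t overload, 7.5 would truncate to 7 and yield 1.75;
  // with the double overload the result is the expected 1.875.
  std::cout << weight_sum / n << "\n";
  return 0;
}
```

The same reasoning motivates the commit: adding overloads for all integer and floating-point types means no mixed arithmetic with SymInt can silently route through a narrowing conversion.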