52be55d2 - Replace Usages of xla::util::Hash* with torch::lazy::Hash (#3148)

As part of the migration of core lazy tensor functionality to PyTorch core, this commit switches call sites over to the newly added torch::lazy::Hash functions from PyTorch. Note: while the Hash* functions are largely identical to the original XLA ones, the underlying uint128 class comes from protobuf instead of absl, since it was a slightly smaller dependency to ingest and get building on multiple OS/platform combinations for PyTorch.
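
A rough sketch of the kind of call-site change this describes is shown below: a few fields are folded into one hash value using the torch::lazy helpers. The function NodeHash and its parameters are hypothetical names for illustration, and the commented-out xla::util call is an assumed pre-migration form, not code taken from this diff.

```cpp
// Minimal sketch of the call-site migration described in the commit message.
// NodeHash, op_name, shape_string, and seed are hypothetical; only the
// torch::lazy::MHash helper and torch::lazy::hash_t type come from PyTorch core.
#include <torch/csrc/lazy/core/hash.h>

#include <cstdint>
#include <string>

// Before (assumed XLA-local helper):
//   xla::util::MHash(op_name, shape_string, seed);

// After: the same combined hash built with the PyTorch core helper.
torch::lazy::hash_t NodeHash(const std::string& op_name,
                             const std::string& shape_string,
                             int64_t seed) {
  // MHash folds each argument into a single 128-bit hash value.
  return torch::lazy::MHash(op_name, shape_string, seed);
}
```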