[TensorPipe] Fix timeout computation (#38928)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/38928
The original code was
```
steady_clock_time_point earliestTimeout = std::chrono::steady_clock::now() + kLargeTimeDuration;
if (std::chrono::steady_clock::now() >= earliestTimeout) {
  break;
}
if (!timeoutMap_.empty()) {
  earliestTimeout = timeoutMap_.begin()->first;
}
timeoutThreadCV_.wait_until(lock, earliestTimeout);
```
which meant we'd never break the loop: breaking required `std::chrono::steady_clock::now()` to have become *larger* than a value computed an instant earlier as `std::chrono::steady_clock::now() + kLargeTimeDuration`, which can never happen.
The fixed code looks like:
```
steady_clock_time_point earliestTimeout = std::chrono::steady_clock::now() + kLargeTimeDuration;
if (!timeoutMap_.empty()) {
  earliestTimeout = timeoutMap_.begin()->first;
}
if (std::chrono::steady_clock::now() >= earliestTimeout) {
  break;
}
timeoutThreadCV_.wait_until(lock, earliestTimeout);
```
but a moment's inspection shows that the code behaves very differently depending on whether `timeoutMap_` is empty, so for readability we should reflect that in the code by making that `if` the main branch. This also lets us do a timeout-less wait when there are no pending messages, which avoids the hacky `kLargeTimeDuration` sentinel.
ghstack-source-id: 104760685
Test Plan: eyes
Differential Revision: D21703021
fbshipit-source-id: 0c5062b714c92b956376ae2a8372223fd0d9f871