[PyTorch] Sync TORCH_INTERNAL_ASSERT optimizations with TORCH_CHECK (#52226)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52226
This brings TORCH_INTERNAL_ASSERT to parity with TORCH_CHECK in terms of optimization for the zero- and one-argument cases.
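To illustrate the shape of that optimization, here is a minimal sketch of the hot/cold split for the zero- and one-argument cases. It is not PyTorch's actual implementation: `MY_INTERNAL_ASSERT` and `assertFail` are hypothetical stand-ins for `TORCH_INTERNAL_ASSERT` and `c10::detail::torchInternalAssertFail`, and `__builtin_expect` plays the role of `C10_UNLIKELY`.
```
#include <cstdio>
#include <cstdlib>

// Outlined, cold failure handler. Marking it noinline/noreturn keeps all of
// the formatting and aborting logic out of the caller, so the success path
// compiles down to a test and a branch.
__attribute__((noinline, noreturn))
void assertFail(
    const char* func, const char* file, unsigned line, const char* msg) {
  std::fprintf(stderr, "%s:%u: %s: assertion failed: %s\n",
               file, line, func, msg);
  std::abort();
}

// Zero- or one-argument case: `"" __VA_ARGS__` concatenates an optional
// string-literal message onto "", so the whole message is a compile-time
// constant and no std::string is built at the call site. (Omitting the
// variadic arguments entirely requires C++20 or the GNU preprocessor
// extension.)
#define MY_INTERNAL_ASSERT(cond, ...)          \
  do {                                         \
    if (__builtin_expect(!(cond), 0)) {        \
      assertFail(__func__, __FILE__, __LINE__, \
                 "" __VA_ARGS__);              \
    }                                          \
  } while (0)

void f2(bool b) { MY_INTERNAL_ASSERT(b, "message"); } // one string literal
void g2(bool b) { MY_INTERNAL_ASSERT(b); }            // zero arguments
```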
ghstack-source-id: 121877054
Test Plan:
Compare generated assembly for
```
#include <c10/util/Exception.h>
#include <cstdlib> // for random()

void f(bool b) {
  TORCH_INTERNAL_ASSERT(b, "message"); // one string-literal argument
}

void g(bool b) {
  TORCH_INTERNAL_ASSERT(b); // zero arguments
}

void h(bool b) {
  TORCH_INTERNAL_ASSERT(b, "message", random()); // runtime-formatted argument
}
```
before/after this diff.
Before: P174916324
After: P174916411
Before, `f` and `g` called out to outlined lambdas to build `std::string`s. After, they load string constants and call `torchInternalAssertFail`. Similarly, `h` calls `random()` and `c10::detail::_str_wrapper()` inline and then calls out to `torchInternalAssertFail`. As with D26380783 (https://github.com/pytorch/pytorch/commit/efbb854ed8a9df35c0ea896c2d216ab3da10d677), I hope to solve the problem of outlining the `random()` and `_str_wrapper()` calls separately.
Profile the AdIndexer benchmark and verify that `toTensor()` is still inlined (it calls TORCH_INTERNAL_ASSERT with an integer argument, like `h` above).
Reviewed By: bhosmer
Differential Revision: D26410575
fbshipit-source-id: f82ffec8d302c9a51f7a82c65bc698fab01e1765