efbb854e - [PyTorch] Avoid std::string in TORCH_CHECK when possible (#52221)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/52221

The previous code forced a `std::string` to be created even when the default message or a user-provided string literal message was used. Now it's not forced, and we don't need an outlined lambda in those cases either.

ghstack-source-id: 121877056

Test Plan: Compare assembly for

```
#include <c10/util/Exception.h>

void f(bool b) {
  TORCH_CHECK(b, "message");
}

void g(bool b) {
  TORCH_CHECK(b);
}

void h(bool b) {
  TORCH_CHECK(b, "message", random());
}
```

before/after in an fbcode optimized build. Before: P174696735 After: P174696840

For `f()` and `g()`, we go from a call to an outlined lambda that did a bunch of `std::string` creation to a load of a string constant before calling `torchCheckFail`. This is a clear improvement.

For `h()`, results are mixed: we save a bunch of *extra* string goop in the outlined lambda and instead call `c10::detail::_str_wrapper` directly. This is good for overall size. However, we no longer outline the call to `random()`, which is less than ideal. I hope to recover the ability to fully outline the `random()` call in future diffs; this is just thorny enough that I don't want to cram even more into one diff.

Added an automated test to make sure `TORCH_CHECK` and `TORCH_INTERNAL_ASSERT` only evaluate their arguments once.

Profiled the AdIndexer mergenet benchmark in perf to check that `IValue::toTensor` is still getting inlined.

Reviewed By: bhosmer

Differential Revision: D26380783

fbshipit-source-id: 288860772423994ac739a8f33e2c09f718e8dd38
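To illustrate the core idea of the change — dispatching on the message type so a plain string literal never triggers `std::string` construction, while composed messages build their string only on the already-failed slow path — here is a minimal sketch. The names `checkFail`, `buildMessage`, and `MY_CHECK` are illustrative stand-ins, not the actual `c10` internals:

```cpp
#include <cstdio>
#include <cstdlib>
#include <sstream>
#include <string>

// Slow path for a plain C-string message: no allocation needed; the
// literal's address is loaded and passed straight through.
[[noreturn]] inline void checkFail(const char* cond, const char* msg) {
  std::fprintf(stderr, "CHECK failed: %s: %s\n", cond, msg);
  std::abort();
}

// Slow path for a composed message: the std::string exists only here,
// after the check has already failed.
[[noreturn]] inline void checkFail(const char* cond, const std::string& msg) {
  checkFail(cond, msg.c_str());
}

// A lone string literal passes through unchanged. Overload resolution
// prefers this non-template over the variadic template below, so
// MY_CHECK(b, "message") never stringifies anything.
inline const char* buildMessage(const char* msg) { return msg; }

// Only genuinely variadic messages pay for stringification (C++17 fold).
template <typename... Args>
std::string buildMessage(const Args&... args) {
  std::ostringstream oss;
  (oss << ... << args);
  return oss.str();
}

#define MY_CHECK(cond, ...)                        \
  do {                                             \
    if (!(cond)) {                                 \
      checkFail(#cond, buildMessage(__VA_ARGS__)); \
    }                                              \
  } while (0)
```

With this shape, `MY_CHECK(b, "message")` compiles to a branch plus a load of a string constant, whereas `MY_CHECK(b, "message", random())` still routes through the string-building template — mirroring the mixed result the commit reports for `h()`.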
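The commit also adds an automated test that `TORCH_CHECK` and `TORCH_INTERNAL_ASSERT` evaluate their arguments exactly once. A minimal sketch of how such a macro keeps that guarantee (and how a test can observe it) follows; `SIMPLE_CHECK` is a hypothetical stand-in, not the real macro:

```cpp
#include <stdexcept>

// A naive macro that repeated its argument, e.g.
//   if (!(cond)) report(#cond, (cond))
// would run side effects in `cond` twice. Binding the result to a local
// once guarantees single evaluation on both the pass and fail paths.
#define SIMPLE_CHECK(cond)                              \
  do {                                                  \
    const bool simple_check_ok = (cond);                \
    if (!simple_check_ok) {                             \
      throw std::runtime_error("check failed: " #cond); \
    }                                                   \
  } while (0)
```

A test then passes a side-effecting expression (such as a lambda call that increments a counter) as the condition and asserts the counter reads 1 afterward, for both a passing and a failing check.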