Prevent out of bounds access to null LTC operands (#80060)
When constructing a `lazy::Node`, [null operands (optional values that aren't included) are dropped](https://github.com/pytorch/pytorch/blob/30fb2c4abaaaa966999eab11674f25b18460e609/torch/csrc/lazy/core/ir.cpp#L82-L84), so the stored operand list can be shorter than the argument list that was passed into the constructor.
This can become a problem during the call to `CanBeReused` in the autogen `LazyIr.h` code. For example:
```cpp
bool CanBeReused(const torch::lazy::Value& input, const c10::optional<torch::lazy::Value>& weight, const c10::optional<torch::lazy::Value>& bias, const c10::optional<torch::lazy::Value>& running_mean, const c10::optional<torch::lazy::Value>& running_var, const bool& training, const double& momentum, const double& eps) const {
  size_t i = 0;
  return (operand(i++) == input &&
          operand(i++) == weight.value_or(kNullValue) &&
          operand(i++) == bias.value_or(kNullValue) &&
          operand(i++) == running_mean.value_or(kNullValue) &&
          operand(i++) == running_var.value_or(kNullValue) &&
          this->training == training &&
          this->momentum == momentum &&
          this->eps == eps);
}
```
Here we operate under the assumption that the number of operands stored in the `lazy::Node` equals the number of operands originally passed into the constructor. Since null operands are dropped, however, the index `i` can run past the end of the stored operand list, producing an out-of-bounds access.
This PR addresses the issue by adding a new `nullable_operand` method, which falls back to the null value instead of raising an index error when the index is out of bounds.
This should solve the issue found at https://github.com/pytorch/pytorch/pull/79637#issuecomment-1162044545
cc: @antoniojkim @ke1337 @wconstab @desertfire
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80060
Approved by: https://github.com/desertfire