extend lt and gt to handle tensors (#80925)
This PR deals with the following situation:
In a HF model, we can have a node:
`gt = getitem_1 > 1`
Here, we are reasoning about the runtime value of `getitem_1` to know which path to take at trace time.
We know this because the node is not used anywhere later in the code.
But there is a different node in the same model:
`lt = arange < view_1`
Here, we are meant to reason about shapes, because `arange` and `view_1` are tensors, so constraints about this node should be over the tensor shapes rather than the runtime values. Note that this node is used later.
It seems that if we assume function arguments are tensors, then we can deduce which situation we are in from the arguments to this node. For example, if both arguments are nodes, and each node is represented by a `DVar` (dimension variable) in our system (since we already processed the children), then we know we are in the first situation and can generate a constraint of the form `DVar(getitem_1) > 1`. It seems reasonable to assume this result won't be used later.
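A minimal sketch of the first case (all names here are illustrative, not the PR's actual classes): when both operands resolve to dimension variables, we emit a single comparison constraint over runtime values.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class DVar:
    """Dimension variable: stands for a runtime integer value."""
    name: str

@dataclass(frozen=True)
class GreaterThan:
    """Constraint asserting lhs > rhs."""
    lhs: Union["DVar", int]
    rhs: Union["DVar", int]

def gen_gt_constraint(lhs, rhs):
    """When an operand's child was processed as a DVar, compare
    runtime values directly, e.g. DVar(getitem_1) > 1."""
    return GreaterThan(lhs, rhs)

# The node `gt = getitem_1 > 1` would then produce:
c = gen_gt_constraint(DVar("getitem_1"), 1)
```

The result of the comparison is not stored in the environment, matching the assumption that it won't be used later.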
Similarly, if we have two nodes but the children are represented by `TVar` (tensor variables), then we know we are in the second situation, so we can generate the same constraints as we do for tensor addition. The constraint will have the form `gt = shape`, where `shape` is constrained in a way similar to addition. Here, we want to store the value of `gt` because it may be used later.
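A hedged sketch of the second case (again with hypothetical names): both operands are tensor variables, so the comparison node gets its own tensor variable whose shape is constrained like an addition result (i.e. the broadcast of the operand shapes), and that variable is stored because later nodes may use it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TVar:
    """Tensor variable: stands for a tensor's shape."""
    name: str

@dataclass(frozen=True)
class BroadcastShape:
    """Constraint: out's shape equals the broadcast of lhs and rhs,
    mirroring the shape constraint generated for tensor addition."""
    out: TVar
    lhs: TVar
    rhs: TVar

def gen_lt_constraints(node_name, lhs, rhs, env):
    out = TVar(node_name)
    env[node_name] = out  # store the result: it may be used later
    return BroadcastShape(out, lhs, rhs)

# The node `lt = arange < view_1` would then produce:
env = {}
c = gen_lt_constraints("lt", TVar("arange"), TVar("view_1"), env)
```

Storing `out` in the environment is what distinguishes this case from the `DVar` one above, where the comparison result is assumed to be dead.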
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80925
Approved by: https://github.com/jansel