Refactor IR to separate Node base and Derived TsNode (#65869)
The Node base class should be as simple and lightweight as possible, so that it remains useful within the other core lazy tensor components: graph execution, LazyTensor wrappers, utils, and Tensor method implementations.
Backend-specific derived classes may provide additional functionality useful during
lowering (which is a backend-specific process anyway).
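The split described above can be sketched roughly as follows. This is an illustrative standalone example, not the actual torch/csrc/lazy definitions: class and member names mirror the description in this PR but are assumptions for demonstration.

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Lightweight base: only what graph execution and LazyTensor wrappers need.
// Hashes are computed by the derived (backend) class and merely stored here.
class Node {
 public:
  Node(std::string op, size_t num_outputs, uint64_t node_hash,
       uint64_t dag_hash)
      : op_(std::move(op)),
        num_outputs_(num_outputs),
        node_hash_(node_hash),
        dag_hash_(dag_hash) {}
  virtual ~Node() = default;

  const std::string& op() const { return op_; }
  size_t num_outputs() const { return num_outputs_; }
  uint64_t node_hash() const { return node_hash_; }
  uint64_t dag_hash() const { return dag_hash_; }  // renamed from hash_

 private:
  std::string op_;
  size_t num_outputs_;
  uint64_t node_hash_;
  uint64_t dag_hash_;
};

// Backend-specific derived class: carries shape info and anything else
// needed during lowering, which is a backend-specific process.
class TsNode : public Node {
 public:
  TsNode(std::string op, std::vector<int64_t> shape, uint64_t node_hash,
         uint64_t dag_hash)
      : Node(std::move(op), /*num_outputs=*/1, node_hash, dag_hash),
        shape_(std::move(shape)) {}

  const std::vector<int64_t>& shape() const { return shape_; }

 private:
  std::vector<int64_t> shape_;  // shape lives only on the backend node
};
```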
Many other changes were made in the process, mostly to remove unused functionality, further modularize the design, and clarify APIs.
- modify codegen scripts to accept a customizable 'backend node' class
- remove shape methods from the Output and Value structs
- add a helper to TsNode for casting Output::node to a TsNode* and
  accessing TsNode::Shape
- apply the GetShapeFromTsOutput and Value helpers to all usages in
  ts_node_lowering.cpp
- add a GetShapeFromTsNode helper to handle one usage in index_ops
- remove all traces of the Shape class from ir.h
- compute node hashes in the derived class; only store them in the base class
- rename hash_ to dag_hash_ to clarify its purpose
- change the Node ctor to accept node_hash and dag_hash computed by
  BackendNode
- remove unused APIs from NodeBase (uses(), ReplaceOperand, ReplaceAllUsesWith)
- change OpList to c10::ArrayRef
- change NodeCast to const
- move operands()-related functionality to the derived TsNode: later, we may
  avoid storing operands as vectors in the node and instead use the
  codegenerated classes to implement operands() directly from their underlying
  operand fields. For now, since there are too many non-codegen IR classes in
  play, we keep it in TsNode as a convenience.