Update codegen to use xla::Shape (#4111)
* Update GenXlaLazyIR codegen to use xla::Shape
* Add xla::Shape to custom shape class
* Revert "Update GenXlaLazyIR codegen to use xla::Shape"
This reverts commit 513350350c923dfe9a7c0e1657dac90602e9a177.
* Update xla codegen to use a custom GenLazyNativeFuncDefinition generator
* Override build_ir_node() in GenXlaLazyNativeFuncDefinition
* Override gen() in GenXlaLazyIR (see the first sketch after this list)
* Update existing MakeNode calls to remove torch::lazy::Shape vector
* Remove torch::lazy::Shape vector for _adaptive_avg_pool2d_* ops
* Update _adaptive_avg_pool3d_* ops as well
* Run linter
* Add torch pin
* Remove lazy::Shape related logic in XLATensor::Create
* Clean up scripts/gen_lazy_tensor.py and add some comments
* Update cpp tests to remove torch::lazy::Shape constructor calls
* Run linter again
* Move bitwise ops to ir_codegen
* Use the new use_lazy_shape flag in the xla lazy codegen (see the second sketch after this list)
* Update torch pin
* Clean up test_symint.cpp
* Run linter on Python files
* Delete .torch_pin
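
For reference, the two generator overrides mentioned above follow the pattern sketched below. This is a minimal sketch, not the actual scripts/gen_lazy_tensor.py: it assumes the pinned torchgen exposes `GenLazyIR.gen()` and `GenLazyNativeFuncDefinition.build_ir_node()` as the extension points named in the commits, and the override bodies are placeholders for the xla::Shape-specific logic.

```python
# Minimal sketch (assumed torchgen API from the pinned PyTorch commit; the
# real scripts/gen_lazy_tensor.py differs in details).
from torchgen.dest.lazy_ir import GenLazyIR, GenLazyNativeFuncDefinition


class GenXlaLazyIR(GenLazyIR):
    def gen(self, schema):
        # Placeholder: emit the IR node class so it derives its shape from
        # xla::Shape instead of taking a std::vector<torch::lazy::Shape>.
        return super().gen(schema)


class GenXlaLazyNativeFuncDefinition(GenLazyNativeFuncDefinition):
    def build_ir_node(self, func, schema):
        # Placeholder: emit MakeNode calls without the torch::lazy::Shape
        # vector argument.
        return super().build_ir_node(func, schema)
```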
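
Continuing from the previous sketch, a second sketch of how the custom generators and the use_lazy_shape flag might be wired into torchgen's entry point. The call is abbreviated, and the keyword names `lazy_ir_generator`, `native_func_definition_generator`, and `use_lazy_shape` are assumptions about the pinned torchgen; the real invocation passes additional required paths and options.

```python
# Hypothetical, abbreviated wiring into torchgen's lazy codegen entry point;
# keyword names are assumptions about the pinned torchgen version.
from torchgen.gen_lazy_tensor import run_gen_lazy_tensor

run_gen_lazy_tensor(
    source_yaml="xla_native_functions.yaml",  # assumed path
    output_dir="torch_xla/csrc/generated",    # assumed path
    dry_run=False,
    lazy_ir_generator=GenXlaLazyIR,
    native_func_definition_generator=GenXlaLazyNativeFuncDefinition,
    # Skip computing/passing torch::lazy::Shape vectors; the XLA backend
    # derives shapes from xla::Shape instead.
    use_lazy_shape=False,
)
```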