[fx2trt] support for ne, logical_not, logical_and (#75444)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/75444
as titled
1. support logical_and, logical_not
2. replace eq, gt, lt with their Python operator equivalents in acc_ops, since the torch ops require torch.Tensor inputs while the Python operators accept scalars as well
3. add more test cases
4. add ne as an individual op rather than a combination of existing ops, since the combinations have limitations. For example, lowering it to eq + logical_not fails the last test case in test_ne.py, because logical_not requires a tensor input.
We also cannot use eq + operator.not_, since `not` is not traceable in FX (it fails with "symbolically traced variables cannot be used as inputs to control flow").
Nor can we use eq + operator.invert, since operator.invert(True) == -2.
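A minimal sketch illustrating the two points above (plain Python, no torch needed): the `operator` module functions work on scalars, and `operator.invert` is bitwise NOT, so it cannot stand in for logical negation.

```python
import operator

# Python operators accept plain scalars; the corresponding torch ops
# (torch.eq, torch.logical_not, ...) require torch.Tensor inputs, which
# is why acc_ops swaps in the Python operators for eq/gt/lt.
assert operator.eq(1, 1) is True
assert operator.ne(1, 2) is True

# Why eq + operator.invert cannot emulate ne: invert is bitwise NOT,
# and True is the int 1, so ~1 == -2, not False.
assert operator.invert(True) == -2
```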
(Note: this ignores all push blocking failures!)
Test Plan:
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_ne
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_logical_and
buck test mode/dev-nosan deeplearning/trt/fx2trt_oss/test/converters:test_unary_ops
Reviewed By: 842974287
Differential Revision: D35232917
fbshipit-source-id: d4601a6883c977caa263f67b9db86cbc862d4780
(cherry picked from commit a72c5ace3856905f7123422426a1270cf9fa8743)