Allow user to disable built-in fuser when using TorchDynamo (#81731)
PyTorch's built-in fuser seems to have higher priority than my fuser registered via
```cpp
torch::jit::RegisterPass pass([accelerator_symbol](std::shared_ptr<torch::jit::Graph>& g) {
OrtFuseGraph(g, Accelerator::Supported, accelerator_symbol);
});
```
With this PR, I can reuse the `aot_autograd` backend in TorchDynamo with my own JIT fuser. My custom context is
```python
class AOTAutogradOrtFusionWithContext:
    """Disable the built-in fuser and pass that context to TorchDynamo."""

    def __init__(self):
        self.backend_ctx_ctor = lambda: torch.jit.fuser("none")

    def __call__(self, gm: torch.fx.GraphModule, example_inputs):
        return AOTAutogradMemoryEfficientFusion.compile_fn(gm, example_inputs)

aot_autograd_ort_strategy = AOTAutogradOrtFusionWithContext()
```
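For context, `torch.jit.fuser` is a context manager, and with the `"none"` option enabled by this PR the built-in JIT fusers are turned off inside the block, leaving fusion opportunities for a custom pass registered via `RegisterPass`. A minimal sketch (the function `f` is just an illustrative workload):
```python
import torch

def f(x):
    return x * 2 + 1

# Inside this context the built-in fusers are disabled, so a custom
# fuser registered via torch::jit::RegisterPass can claim the graph.
with torch.jit.fuser("none"):
    scripted = torch.jit.script(f)
    out = scripted(torch.ones(3))

print(out)  # tensor([3., 3., 3.])
```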
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81731
Approved by: https://github.com/davidberard98