[FX] torch.fx.symbolic_trace patching improvements and `math.*` support (#50793)
Summary:
This contains some improvements and refactoring to how patching is done in `torch.fx.symbolic_trace`.
1) Functions from `math.*` are now supported without needing to call `torch.fx.wrap()`. `wrap()` actually errors on some of these functions because they are written in C and don't have `__code__`, requiring use of the string version instead. `math` usage is relatively common; for example, [BERT uses math.sqrt here](https://github.com/pytorch/benchmark/blob/6f79061bd145eeaa9b4a75847939901fd245ddf9/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/attention/single.py#L16). Both `math.sqrt()` and `from math import sqrt` (which copies the function into the module namespace) are supported. When a module is called, FX now searches that module's global scope for functions to patch. A minimal sketch follows the list.
2) [Guarded behind `env FX_PATCH_GETITEM=1`] Fixes a failing trace of [PositionalEmbedding from BERT](https://github.com/pytorch/benchmark/blob/6f79061bd145eeaa9b4a75847939901fd245ddf9/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/position.py#L24), which previously raised `TypeError: slice indices must be integers or None or have an __index__ method` (a `Proxy` was being passed into `Tensor.__getitem__`). See https://github.com/pytorch/pytorch/issues/50710 for why this is disabled by default. A repro sketch follows the list.
3) Support for automatically wrapping functions that have been copied into a different module's scope via an import like `from foo import wrapped_function`. This isn't exposed through `torch.fx.wrap()` either, but it is used to implement the `math.*` support above; see the third sketch below.
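As a minimal sketch of (1), assuming a module in the spirit of the BERT attention code linked above (the module itself is illustrative, not taken from the benchmark or the tests), a `math.sqrt` call now traces without any `torch.fx.wrap()` call:
```python
import math

import torch
import torch.fx

class ScaledDotProduct(torch.nn.Module):
    def forward(self, query, key):
        # math.sqrt is found in this module's global scope and patched
        # during tracing, so the call is recorded as a call_function node.
        scores = torch.matmul(query, key.transpose(-2, -1))
        return scores / math.sqrt(query.size(-1))

traced = torch.fx.symbolic_trace(ScaledDotProduct())
print(traced.graph)  # includes a call_function node targeting math.sqrt
```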
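For (2), a repro sketch in the spirit of the BERT `PositionalEmbedding` (names and shapes are illustrative). The flag is consulted when `torch.fx` is imported, so the environment variable is set first:
```python
import os
os.environ["FX_PATCH_GETITEM"] = "1"  # set before torch.fx is imported

import torch
import torch.fx

class PositionalEmbedding(torch.nn.Module):
    def __init__(self, d_model=64, max_len=512):
        super().__init__()
        self.register_buffer("pe", torch.zeros(1, max_len, d_model))

    def forward(self, x):
        # x.size(1) is a Proxy during tracing. With the flag set,
        # Tensor.__getitem__ is patched so the slice below is recorded
        # as a node instead of raising the TypeError above.
        return self.pe[:, :x.size(1)]

traced = torch.fx.symbolic_trace(PositionalEmbedding())
```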
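And for (3), the same global-scope search matches a function imported under a bare name into the calling module's namespace, which is how the `from math import sqrt` spelling works (again an illustrative sketch):
```python
from math import sqrt  # copied into this module's namespace

import torch
import torch.fx

class Scale(torch.nn.Module):
    def forward(self, x):
        # `sqrt` here is the same object as math.sqrt; tracing finds it
        # in this module's globals and patches it automatically.
        return x / sqrt(x.size(-1))

print(torch.fx.symbolic_trace(Scale()).graph)
```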
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50793
Test Plan: Added unittests to check each feature
Reviewed By: jamesr66a
Differential Revision: D25999788
Pulled By: jansel
fbshipit-source-id: f1ce11a69b7d97f26c9e2741c6acf9c513a84467