reorganize imports to make model_analyzer more independent. (#1000)
Summary:
The model analyzer can now be copied into other projects and imported independently. To integrate model_analyzer into your own project, copy the folder https://github.com/pytorch/benchmark/tree/main/components/model_analyzer into your project and follow the example below.
```python
from .model_analyzer.TorchBenchAnalyzer import ModelAnalyzer
# create an instance
model_analyzer = ModelAnalyzer()
model_analyzer.set_export_csv_name("all_records.csv")
# FLOPS-related GPU metrics are enabled by default; other metrics must be enabled explicitly
model_analyzer.add_mem_throughput_metrics()
# DCGM is connected once start_monitor() is called
model_analyzer.start_monitor()
# your own code to profile
run_app()
# stop and aggregate the profiling results
model_analyzer.stop_monitor()
model_analyzer.aggregate()
model_analyzer.export_all_records_to_csv()
tflops = model_analyzer.calculate_flops()
print('{:<20} {:>20}'.format("FLOPS:", "%.4f TFLOPs per second" % tflops))
```
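After stop_monitor() and aggregate(), export_all_records_to_csv() writes the collected records to the file name set by set_export_csv_name(). The snippet below is a minimal sketch of one way to inspect that output with Python's built-in csv module; the exact column names depend on which metrics were enabled, so it reads the header generically rather than assuming a fixed schema.
```python
import csv

# Read back the records written by export_all_records_to_csv().
# Column names depend on the metrics enabled above (FLOPS by default,
# plus memory throughput here), so print each row generically.
with open("all_records.csv", newline="") as f:
    for row in csv.DictReader(f):
        print(row)
```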
Pull Request resolved: https://github.com/pytorch/benchmark/pull/1000
Reviewed By: xuzhao9
Differential Revision: D37498346
Pulled By: FindHao
fbshipit-source-id: ef1d3efe5a5ef502e5109c70bb98558b7c008e11