[ao][sparsity] Data Sparsifier Benchmarking: Forward time evaluation of the sparse dlrm model with torch.sparse (#81780)
The objective is to check whether introducing torch.sparse COO tensors into the sparse DLRM model improves inference time across different sparsity levels.
The `evaluate_forward_time.py` script uses the `sparse_model_metadata.csv` file dumped by
`evaluate_disk_savings.py`. It records the forward time of the sparse DLRM model both with and
without sparse COO tensors, and dumps the results into a csv file, `dlrm_forward_time_info.csv`.
**Results**: The DLRM model with sparse COO tensors is slower (roughly 2x).
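The comparison can be sketched as follows. This is a minimal, hypothetical illustration (not code from the benchmark script) of timing a matmul against the same weights stored as a sparse COO tensor; the sizes, sparsity level, and iteration count are arbitrary choices for the example.

```python
import time
import torch

torch.manual_seed(0)

# Build a weight matrix and zero out ~90% of its entries to mimic a high
# sparsity level, then make a sparse COO copy of the same weights.
dense_weight = torch.randn(512, 512)
mask = torch.rand_like(dense_weight) > 0.9
dense_weight = dense_weight * mask
sparse_weight = dense_weight.to_sparse()

x = torch.randn(512, 256)

def timed(fn, iters=50):
    # Warm up once, then measure average wall-clock time per call.
    fn()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - start) / iters

t_dense = timed(lambda: dense_weight @ x)
t_sparse = timed(lambda: torch.sparse.mm(sparse_weight, x))
print(f"dense:  {t_dense * 1e3:.3f} ms")
print(f"sparse: {t_sparse * 1e3:.3f} ms")
```

The actual benchmark measures the full DLRM forward pass rather than a single matmul, but the same dense-vs-`torch.sparse.mm` substitution is the operation under test.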
After running `evaluate_disk_savings.py`, run: `python evaluate_forward_time.py --raw_data_file=<path_to_raw_data_txt_file> --processed_data_file=<path_to_kaggleAdDisplayChallenge_processed.npz> --sparse_model_metadata=<path_to_sparse_model_metadata_csv>`
Dependencies: DLRM Repository (https://github.com/facebookresearch/dlrm)
Test Plan: None
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81780
Approved by: https://github.com/z-a-f