Enable fx int8 for most models on cpu device (#1485)
Summary:
Enable fx int8 for most models on cpu device
This works toward the roadmap in https://github.com/pytorch/benchmark/issues/1293 for fx int8 support. Below is an example run on a CLX (Cascade Lake) machine:
```
$ python run.py alexnet -d cpu -t eval --precision fp32 -m eager
Running eval method from alexnet on cpu in eager mode with input batch size 128.
CPU Total Wall Time: 93.586 milliseconds
CPU Peak Memory: 6.3857 GB
$ python run.py alexnet -d cpu -t eval --precision fx_int8 -m eager
Running eval method from alexnet on cpu in eager mode with input batch size 128.
CPU Total Wall Time: 21.892 milliseconds
CPU Peak Memory: 1.4150 GB
$ python run.py alexnet -d cpu -t eval --precision fp32 -m jit
Running eval method from alexnet on cpu in jit mode with input batch size 128.
CPU Total Wall Time: 70.556 milliseconds
CPU Peak Memory: 1.5918 GB
Correctness: True
$ python run.py alexnet -d cpu -t eval --precision fx_int8 -m jit
Running eval method from alexnet on cpu in jit mode with input batch size 128.
CPU Total Wall Time: 21.176 milliseconds
CPU Peak Memory: 1.6758 GB
Correctness: True
$ python run.py alexnet -d cpu -t eval --precision fx_int8 -m jit --quant-engine fbgemm
Running eval method from alexnet on cpu in jit mode with input batch size 128.
CPU Total Wall Time: 29.487 milliseconds
CPU Peak Memory: 1.6777 GB
Correctness: True
```
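For context, the `fx_int8` precision path relies on PyTorch's FX graph mode post-training static quantization. The sketch below shows the general shape of that flow (prepare, calibrate, convert) on a small stand-in model; the model, shapes, and calibration loop are illustrative assumptions, not the benchmark's actual code, and the API shown assumes a recent `torch` with `torch.ao.quantization`.

```python
import torch
import torch.nn as nn
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

# Hypothetical stand-in model; the benchmark quantizes real models like alexnet.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)
        self.relu = nn.ReLU()
        self.fc = nn.Linear(8 * 30 * 30, 10)

    def forward(self, x):
        x = self.relu(self.conv(x))
        return self.fc(torch.flatten(x, 1))

model = SmallNet().eval()
example_inputs = (torch.randn(1, 3, 32, 32),)

# The quantized engine selects the int8 kernel backend, analogous to the
# --quant-engine fbgemm flag in the runs above.
torch.backends.quantized.engine = "fbgemm"
qconfig_mapping = get_default_qconfig_mapping("fbgemm")

# 1. Insert observers into the FX-traced graph.
prepared = prepare_fx(model, qconfig_mapping, example_inputs)

# 2. Calibrate: run representative inputs so observers record activation ranges.
with torch.no_grad():
    prepared(*example_inputs)

# 3. Lower the observed graph to quantized int8 ops.
quantized = convert_fx(prepared)

out = quantized(*example_inputs)
print(tuple(out.shape))
```

The speedups and memory reductions in the transcript come from step 3: convolutions and linear layers execute as int8 kernels instead of fp32.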
Pull Request resolved: https://github.com/pytorch/benchmark/pull/1485
Reviewed By: weiwangmeta
Differential Revision: D44256938
Pulled By: xuzhao9
fbshipit-source-id: 1754028660b6908e66616531a42571e9c08690e6