Add quantization support for GPT2 past state and use model to generate outputs in OpTester (#4340)
* Make quantization support GPT2 past state
* Make OpTester able to generate reference outputs from a model. With this, there is no need to compute expected outputs manually, which is impossible in some cases.