onnxruntime
fc5e65a2 - Add quantization support for GPT2 past state and use model to generate outputs in OpTester (#4340)

Committed 5 years ago
* Make quantization support GPT2 past state.
* Enable OpTester to generate reference outputs by running a model, so expected outputs no longer need to be computed manually, which is impossible in some cases.