optimum
2742bf14 - Add the ability to choose the ONNX runtime execution provider in `ORTModel` (#137)

Squashed commit messages:
  • added option for onnxruntime execution provider
  • formatting
  • better description
  • changed ort provider to model arguments
  • added documentation
  • changed ort provider name
  • formatting
  • remove wrong files
  • trigger actions
  • added error catch in case the given arguments for cpu-gpu optimization are contradictory
  • remove unused files
  • correct wrong catch
  • styling

Co-authored-by: Felix Marty <felix@huggingface.co>
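The change exposes the ONNX Runtime execution provider as a model argument, so the `ORTModel` used by the examples can run on either CPU or GPU. The sketch below illustrates the underlying onnxruntime mechanism only; the `execution_provider` argument name and the `ORTModel` call at the end are assumptions based on the commit messages, not the verified signature from `optimum/onnxruntime/model.py`.

```python
# Minimal sketch of execution-provider selection with ONNX Runtime.
# The ORTModel argument name is an assumption; the onnxruntime calls are standard.
import onnxruntime


def create_session(onnx_path: str, execution_provider: str = "CPUExecutionProvider"):
    available = onnxruntime.get_available_providers()
    # Fail early if the requested provider is not available in this build,
    # in the spirit of the commit's error catch for contradictory CPU/GPU settings.
    if execution_provider not in available:
        raise ValueError(
            f"{execution_provider} is not available (installed providers: {available}). "
            "Install onnxruntime-gpu to use CUDAExecutionProvider."
        )
    # Providers are tried in the order given; CPU is kept as a fallback.
    return onnxruntime.InferenceSession(
        onnx_path, providers=[execution_provider, "CPUExecutionProvider"]
    )


# Hypothetical usage mirroring the examples (argument name assumed):
# model = ORTModel(onnx_model_path, onnx_config, execution_provider="CUDAExecutionProvider")
```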
Files changed:
  • examples/onnxruntime/optimization/question-answering/run_qa.py
  • examples/onnxruntime/optimization/text-classification/run_glue.py
  • examples/onnxruntime/optimization/token-classification/run_ner.py
  • examples/onnxruntime/quantization/question-answering/run_qa.py
  • examples/onnxruntime/quantization/text-classification/run_glue.py
  • examples/onnxruntime/quantization/token-classification/run_ner.py
  • optimum/onnxruntime/model.py