optimum
22b10a4e - Add llama onnx export & onnxruntime support (#975)

Committed 2 years ago
Add llama onnx export & onnxruntime support (#975)

* Add config for Llama
* Register Llama in tasks
* Add Llama and its corresponding tiny-random model from HF into tests
* Add tests for modeling and exporters
* Add an entry for Llama
* Add Llama to the supported normalized configs
* Add optimization support for Llama
* Change tiny-llama source to trl-internal-testing
* Fix tests
* Fix task map

Co-authored-by: Chernenko Ruslan <ractyfree@gmail.com>