Add Llama ONNX export & ONNX Runtime support (#975)
* Add an ONNX export config for Llama (see the sketch after this list)
* Register Llama in the exporter tasks manager
* Add Llama and its corresponding tiny-random model from the Hugging Face Hub into the tests
* Add tests for modeling and exporters
* Add an entry for Llama
* Add Llama to the supported normalized configs (sketched below)
* Add ONNX Runtime optimization support for Llama (usage sketched below)
* Change the tiny-llama source to trl-internal-testing
* Fix tests
* Fix task map
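
For context, a minimal sketch of what the export config added here can look like, assuming optimum's `TextDecoderOnnxConfig` base class and `NormalizedTextConfig` helper; the opset value is an illustrative assumption, not a verbatim copy of the diff:

```python
# Sketch only: a decoder-only ONNX export config for Llama. The inputs and
# outputs (input_ids, attention_mask, past_key_values, logits) are inherited
# from the TextDecoderOnnxConfig base class.
from optimum.exporters.onnx.config import TextDecoderOnnxConfig
from optimum.utils import NormalizedTextConfig


class LlamaOnnxConfig(TextDecoderOnnxConfig):
    DEFAULT_ONNX_OPSET = 13  # assumed value; check the merged diff
    NORMALIZED_CONFIG_CLASS = NormalizedTextConfig
```

Registering this class under the `"llama"` model type in `TasksManager` is what lets the exporter resolve Llama checkpoints and their supported tasks to this config.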
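The normalized-config entry maps the `"llama"` model type to the attribute names (hidden size, layer and head counts) that the ONNX Runtime modeling code reads; a sketch, assuming the `NormalizedConfigManager` registry in `optimum.utils`:

```python
# Sketch: Llama's config uses the standard attribute names, so the plain
# NormalizedTextConfig mapping suffices; the registry entry amounts to
#     "llama": NormalizedTextConfig
# inside NormalizedConfigManager._conf.
from optimum.utils import NormalizedConfigManager

config_cls = NormalizedConfigManager.get_normalized_config_class("llama")
```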
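End to end, the PR lets a Llama checkpoint be exported to ONNX, run through `ORTModelForCausalLM`, and graph-optimized with `ORTOptimizer`. A hedged usage sketch follows; the model id is an assumption (the commits above only name trl-internal-testing as the source of the tiny test model), and `export=True` follows current optimum naming:

```python
# Usage sketch; the model id is an assumed test checkpoint, not part of this PR.
from optimum.onnxruntime import ORTModelForCausalLM, ORTOptimizer
from optimum.onnxruntime.configuration import OptimizationConfig
from transformers import AutoTokenizer

model_id = "trl-internal-testing/tiny-random-LlamaForCausalLM"  # assumed id
model = ORTModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Inference through ONNX Runtime.
inputs = tokenizer("Hello, Llama!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Graph optimization, enabled by registering llama with the ORT optimizer.
optimizer = ORTOptimizer.from_pretrained(model)
optimizer.optimize(
    save_dir="llama_onnx_optimized",
    optimization_config=OptimizationConfig(optimization_level=2),
)
```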
---------
Co-authored-by: Chernenko Ruslan <ractyfree@gmail.com>