Add mt-bench (#75)
What this PR does:
- Uses custom metrics and tasks to add LLM-as-a-judge evaluation
- Adds multi-turn generation
- Adds mt-bench metric
This implementation uses mt-bench prompts from [InflectionAI](https://github.com/InflectionAI/Inflection-Benchmarks). The code is inspired by the original implementation of mt-bench, with a few notable differences:
- mt-bench uses a custom-made chat templating system; we rely on the tokenizer's chat template instead (see the first sketch below).
- mt-bench uses an old version of the OpenAI API; we use the newest one, with much simpler chat prompt formatting logic (see the second sketch below). This also makes it easy to add more models to act as judges.
- We do not vary the temperature based on the sample being evaluated. All samples are generated with `do_sample=False` and the temperature set to `0.0`.
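
A minimal sketch of what the multi-turn flow can look like with the tokenizer's chat template and greedy decoding; the model name, question, and follow-up are illustrative, not taken from this PR:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model; any chat model whose tokenizer ships a chat template works.
model_name = "HuggingFaceH4/zephyr-7b-beta"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Turn 1: render the first question with the tokenizer's own chat template.
conversation = [{"role": "user", "content": "Compose an engaging travel blog post about Hawaii."}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
# Greedy decoding for every sample (do_sample=False, i.e. temperature effectively 0.0).
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=512)
first_answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Turn 2: append the model's answer and the follow-up question, then generate again.
conversation += [
    {"role": "assistant", "content": first_answer},
    {"role": "user", "content": "Rewrite your previous response as a limerick."},
]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, do_sample=False, max_new_tokens=512)
second_answer = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
```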
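
And a minimal sketch of a judge call through the current OpenAI client; the judge model and scoring prompt below are placeholders, not the exact ones used in this PR:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Compose an engaging travel blog post about Hawaii."
answer = "<model answer to be judged>"

judge_prompt = (
    "Please act as an impartial judge and rate the quality of the response "
    f"on a scale of 1 to 10.\n\n[Question]\n{question}\n\n[Answer]\n{answer}"
)

# The new client exposes chat completions directly; swapping the judge
# is just a matter of changing the `model` argument (or the client itself).
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": judge_prompt}],
    temperature=0.0,
)
print(response.choices[0].message.content)
```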