llama.cpp
HellaSwag: split token evaluation into batches if needed
#2681
Merged

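The change concerns HellaSwag scoring in the `perplexity` example: a context + ending token sequence can be longer than the batch size the `llama_context` was created with, so instead of evaluating all tokens in a single call, they are fed in chunks of at most `n_batch` tokens while `n_past` tracks how many tokens are already in the KV cache. The snippet below is a minimal sketch of that idea, assuming the `llama_eval()` API of that period (`ctx, tokens, n_tokens, n_past, n_threads`); the helper name `eval_tokens_in_batches` is hypothetical and this is an illustration of the technique, not the PR's actual diff.

```cpp
#include "llama.h"

#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch: evaluate a (possibly long) token sequence in chunks of at most
// n_batch tokens, advancing n_past so the KV cache stays consistent.
static bool eval_tokens_in_batches(llama_context * ctx,
                                   const std::vector<llama_token> & tokens,
                                   int n_batch, int n_threads) {
    int n_past = 0;
    for (size_t i = 0; i < tokens.size(); i += (size_t) n_batch) {
        // Number of tokens to evaluate in this call (last chunk may be smaller).
        const int n_eval = std::min((int) (tokens.size() - i), n_batch);
        if (llama_eval(ctx, tokens.data() + i, n_eval, n_past, n_threads) != 0) {
            return false; // evaluation failed
        }
        n_past += n_eval;
    }
    return true;
}
```

With this kind of loop the HellaSwag task can score endings whose combined prompt length exceeds `n_batch`, at the cost of multiple `llama_eval()` calls per ending.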