llama.cpp
Loading models directly into VRAM, norm calculation on GPUs, broadcasting for ggml_mul
#1483
Merged