llama.cpp
gguf-py: add support for I8, I16 and I32
#6045
Merged