llama.cpp
a3fa0358 - server: print actual model name in 'model not found' error (#19117)

Experimenting with AI, my environment gets messy fast and it's not always easy to know which model my software is trying to load. This helps with troubleshooting.

Before:
Error: { code = 400, message = "model not found", type = "invalid_request_error" }

After:
Error: { code = 400, message = "model 'toto' not found", type = "invalid_request_error" }
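From a client's perspective, the change only affects the error text. The following is a minimal sketch that triggers the error by requesting a model name the server doesn't know about and prints the returned message; it assumes llama-server is running locally on its default port 8080 and exposes the OpenAI-compatible /v1/chat/completions endpoint, and that 'toto' is not a loaded model.

import json
import urllib.request
import urllib.error

# Deliberately request a model name the server does not have loaded.
payload = json.dumps({
    "model": "toto",
    "messages": [{"role": "user", "content": "hello"}],
}).encode()

req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    # With this change the message names the offending model,
    # e.g. "model 'toto' not found" instead of just "model not found".
    body = json.load(e)
    print(body["error"]["message"])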