llama.cpp
Commit 4e87962e
mtmd : fix glm-edge redundant token count (#13139)
Committed 131 days ago

mtmd : fix glm-edge redundant token count (#13139)

* mtmd : fix glm-edge redundant token count
* fix chat template
* temporarily disable GLMEdge test chat tmpl
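The commit itself only states that a redundant token count was fixed for glm-edge. As a rough illustration of what such a bug can look like in a multimodal prompt pipeline, the sketch below shows image placeholder tokens being counted once as part of the templated text and then again when the image chunk is appended. This is a hypothetical, simplified example; the structs and functions here are invented for illustration and are not llama.cpp's actual mtmd code.

```cpp
// Hypothetical sketch (not llama.cpp's mtmd implementation): a "redundant
// token count" where image placeholder tokens are added on top of a text
// chunk that already includes them, inflating the reported prompt length.
#include <cstdio>
#include <vector>

struct Chunk {
    bool is_image;
    int  n_tokens; // text: tokenized length, image: number of embedding slots
};

// Buggy accounting: the image placeholder was already counted inside the
// templated text chunk, yet it is added again for every image chunk.
static int count_tokens_buggy(const std::vector<Chunk> & chunks, int placeholder_tokens) {
    int n = 0;
    for (const auto & c : chunks) {
        n += c.n_tokens;
        if (c.is_image) {
            n += placeholder_tokens; // redundant: already part of the text chunk
        }
    }
    return n;
}

// Fixed accounting: each chunk contributes its own size exactly once.
static int count_tokens_fixed(const std::vector<Chunk> & chunks) {
    int n = 0;
    for (const auto & c : chunks) {
        n += c.n_tokens;
    }
    return n;
}

int main() {
    // Made-up sizes: 6 text tokens, one image occupying 64 slots, 3 more text tokens.
    std::vector<Chunk> chunks = {
        { false, 6  },
        { true,  64 },
        { false, 3  },
    };
    printf("buggy total: %d\n", count_tokens_buggy(chunks, 2)); // over-counts by 2
    printf("fixed total: %d\n", count_tokens_fixed(chunks));    // 73
    return 0;
}
```

An inflated count like this would misalign the prompt length used for batching and position tracking, which is consistent with the kind of off-by-N accounting issue the commit title describes.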
References
#13139 - mtmd : fix glm-edge redundant token count
Author
ngxson
Parents
fb0471d1