llama.cpp
1a8c8795 - ci : check if there is enough VRAM (#3596)
Commit
1 year ago
ci : check if there is enough VRAM (#3596) ggml-ci
References
#3596 - ci : add M1 node
Author
ggerganov
Parents
b016596d
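
The commit subject indicates the CI now verifies that enough free VRAM is available before running GPU jobs. As a minimal sketch of such a guard (an illustration under assumptions, not the commit's actual script: the program name, the 8 GiB default, and the CLI argument are hypothetical), a standalone check using the CUDA runtime's cudaMemGetInfo could look like:

```c
// vram_check.cu — hypothetical standalone CI guard, not the commit's actual script.
// Exits 0 if enough free VRAM is available, non-zero otherwise.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    // Required free VRAM in GiB; 8 GiB is an illustrative default.
    size_t required_gib = (argc > 1) ? strtoull(argv[1], NULL, 10) : 8;

    size_t free_bytes = 0, total_bytes = 0;
    cudaError_t err = cudaMemGetInfo(&free_bytes, &total_bytes);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaMemGetInfo failed: %s\n", cudaGetErrorString(err));
        return 1; // no usable GPU: fail the check
    }

    double free_gib  = (double) free_bytes  / (1024.0 * 1024.0 * 1024.0);
    double total_gib = (double) total_bytes / (1024.0 * 1024.0 * 1024.0);
    printf("VRAM: %.1f GiB free / %.1f GiB total (need %zu GiB)\n",
           free_gib, total_gib, required_gib);

    // A non-zero exit code lets a CI script skip or fail the GPU stage.
    return (free_bytes >= required_gib * (1024ull * 1024 * 1024)) ? 0 : 1;
}
```

A CI script could compile this with nvcc and gate GPU test stages on its exit code, e.g. `./vram_check 8 && run_gpu_tests` (the stage name here is likewise hypothetical).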