llama.cpp
7f412dab - enable CPU HBM (#2603)
Committed 2 years ago
enable CPU HBM (#2603)

* add cpu hbm support
* add memalign 0 byte check
* Update ggml.c
* Update llama.cpp
* ggml : allow ggml_init with 0 size
* retrigger ci
* fix code style

---------

Co-authored-by: Georgi Gerganov <ggerganov@gmail.com>
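For context, the CPU-HBM path allocates buffers from high-bandwidth memory instead of regular DRAM. Below is a minimal sketch of what such an allocation hook can look like, assuming the memkind hbwmalloc API (hbw_posix_memalign / hbw_free) and a GGML_USE_CPU_HBM build flag; the names hbm_aligned_alloc / hbm_aligned_free and the alignment constant are illustrative placeholders, not the exact code added by this commit. The size == 0 guard mirrors the "memalign 0 byte check" listed in the commit message.

```c
// Sketch of an HBM-aware aligned allocator, assuming the memkind hbwmalloc API.
// Build (illustrative): gcc -DGGML_USE_CPU_HBM hbm_alloc.c -lmemkind
#include <stdio.h>
#include <stdlib.h>

#ifdef GGML_USE_CPU_HBM
#include <hbwmalloc.h>   // hbw_posix_memalign(), hbw_free() from memkind
#endif

#define TENSOR_ALIGN 64  // illustrative alignment (typical cache-line size)

// Allocate `size` bytes, preferring CPU high-bandwidth memory when available.
static void * hbm_aligned_alloc(size_t size) {
    if (size == 0) {
        size = 1; // guard against 0-byte memalign requests ("memalign 0 byte check")
    }
    void * ptr = NULL;
#ifdef GGML_USE_CPU_HBM
    int err = hbw_posix_memalign(&ptr, TENSOR_ALIGN, size);
#else
    int err = posix_memalign(&ptr, TENSOR_ALIGN, size);
#endif
    if (err != 0) {
        fprintf(stderr, "aligned alloc of %zu bytes failed (err=%d)\n", size, err);
        return NULL;
    }
    return ptr;
}

static void hbm_aligned_free(void * ptr) {
#ifdef GGML_USE_CPU_HBM
    hbw_free(ptr);  // memory from hbw_posix_memalign must be released with hbw_free
#else
    free(ptr);
#endif
}

int main(void) {
    void * buf = hbm_aligned_alloc(1 << 20); // 1 MiB scratch buffer
    if (buf == NULL) {
        return 1;
    }
    hbm_aligned_free(buf);
    return 0;
}
```

Falling back to posix_memalign when the flag is not defined keeps the build working on machines without HBM or without memkind installed.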
References
#2603 - enable CPU HBM
Author
jikunshang
Parents
6336d834