llama.cpp
Commit 7afdfc9b
ggml-cpu: Enable FP16 MMA kernels on PPC (#19060)
Date
27 days ago
References
#19060 - ggml-cpu: Enable FP16 MMA kernels on PPC
Author
shalinib-ibm
Parents
94eeb596