llama.cpp
ggml-cpu: Faster IQ1 mul_mat_vec on AVX2 using BMI2 instructions
#12154 · Merged
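The diff itself is not reproduced on this page, but as a rough illustration of the BMI2 technique the title refers to: below is a minimal sketch, assuming the common pattern of using `_pdep_u64` to scatter packed low-bit quantized values into byte lanes in a single instruction, replacing a chain of shifts and masks. The helper name and the 2-bit layout are illustrative only and not taken from the PR.

```c
// Minimal sketch (not the PR's actual code) of the BMI2 idea:
// _pdep_u64 deposits the low-order source bits into the positions
// selected by the mask, so packed quant bits can be expanded into
// one value per byte lane without a shift/mask loop.
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

// Hypothetical helper: expand 8 packed 2-bit values (16 bits total)
// into 8 bytes, one value per byte, using a single pdep.
static inline uint64_t expand_2bit_to_bytes(uint16_t packed) {
    // Mask 0x03 per byte: each pair of source bits lands in the
    // low 2 bits of the corresponding destination byte.
    return _pdep_u64((uint64_t)packed, 0x0303030303030303ULL);
}

int main(void) {
    uint16_t packed = 0x1B1B;                    // example packed values
    uint64_t bytes  = expand_2bit_to_bytes(packed);
    printf("%016llx\n", (unsigned long long)bytes); // 0001020300010203
    return 0;
}
```

Compiling this sketch requires BMI2 support (e.g. `-mbmi2` with GCC/Clang); the actual kernel would feed the expanded bytes into the AVX2 dot-product path rather than printing them.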