llama.cpp
748ee9fe
- ggml : fix multi-threaded clamp_f32 (#11824)
Commit
133 days ago
ggml : fix multi-threaded clamp_f32 (#11824)

* Bug fix for clamp_f32: when using tensors larger than 1-D, the clamp operation did not work because the function returned early whenever ith was not 0, so only thread 0 ever processed its rows.
References
#11824 - Bug fix for clamp_f32
Author
Burton2000
Parents
198b1ec6
Files
1
ggml/src/ggml-cpu/ggml-cpu.c