llama.cpp
[SYCL] Fix WARP_SIZE=16 bug of Intel GPU #8266 (Merged)
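For context on the title: Intel GPUs commonly execute SYCL kernels with a sub-group ("warp") width of 16, while kernels hard-coded for 32 lanes read past the last lane during sub-group reductions. Below is a minimal sketch of the warp-width-parameterized reduction pattern such a fix relies on, assuming SYCL 2020 group algorithms; the name warp_reduce_sum echoes a ggml-sycl helper, but this body is illustrative, not the PR's code.

```cpp
#include <sycl/sycl.hpp>

// Sum reduction across one sub-group, parameterized on the sub-group width
// instead of hard-coding 32. With WIDTH = 32 on a 16-wide Intel sub-group,
// the first shift would reach past the last lane and yield garbage.
template <int WIDTH>
static inline float warp_reduce_sum(float x, const sycl::sub_group & sg) {
#pragma unroll
    for (int offset = WIDTH / 2; offset > 0; offset >>= 1) {
        x += sycl::shift_group_left(sg, x, offset);
    }
    return x;
}
```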
Commits (8)
fix group_norm ut (luoyu-intel committed 1 year ago)
split softmax (luoyu-intel committed 1 year ago; see the dispatch sketch after this list)
fix softmax (luoyu-intel committed 1 year ago)
revert qx_k (luoyu-intel committed 1 year ago)
add concat support condition (luoyu-intel committed 1 year ago)
revert debug code (luoyu-intel committed 1 year ago)
move QK_WARP_SIZE to presets.hpp (luoyu-intel committed 1 year ago; see the presets.hpp sketch after this list)
rebase work_space api (luoyu-intel committed 1 year ago)
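On "move QK_WARP_SIZE to presets.hpp": the point of a single definition site is that every kernel file agrees on the warp-width constants instead of carrying its own copy. A hypothetical sketch; the values below are illustrative assumptions, not the PR's actual ones.

```cpp
// presets.hpp (sketch): one home for warp-width constants instead of
// per-file copies. Values are illustrative assumptions.
#pragma once

#define WARP_SIZE    16  // sub-group width generic kernels use on Intel GPUs
#define QK_WARP_SIZE 32  // the quantized (QK) kernels assume 32 lanes, so
                         // they keep a separate constant
```

Keeping the two constants apart lets the quantized kernels request a 32-wide sub-group explicitly while everything else follows the device's native width.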