llama.cpp
commit 2ddc9bbe
Merge branch 'master' into gg/flash-attn
Committed 1 year ago
References
#5021 - ggml : add Flash Attention
Author
ggerganov
Parents
3d03bcb7
d3bac7d5
Files (30)
.github/workflows/
    build.yml
    editorconfig.yml
.gitignore
CMakeLists.txt
README-sycl.md
README.md
common/
    common.cpp
    common.h
    train.cpp
examples/
    batched-bench/
        batched-bench.cpp
    llama-bench/
        llama-bench.cpp
    llava/
        MobileVLM-README.md
    server/
        server.cpp
    sycl/
        ls-sycl-device.cpp
        win-build-sycl.bat
        win-run-llama2.bat
ggml-cuda.cu
ggml-metal.m
ggml-metal.metal
ggml-sycl.cpp
ggml-sycl.h
ggml-vulkan-shaders.hpp
ggml-vulkan.cpp
ggml.c
ggml.h
ggml_vk_generate_shaders.py
llama.cpp
llama.h
scripts/
    install-oneapi.bat
tests/
    test-backend-ops.cpp
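Background on the referenced PR (#5021): flash-attention kernels avoid materializing the full attention score matrix by streaming over key/value blocks with an online softmax. The following is a minimal sketch of that general recurrence, given for context only; it is not taken from this commit and need not match the exact scheme the PR implements. With initial values m^(0) = -inf, l^(0) = 0, O^(0) = 0, processing key/value blocks j = 1..T:

    % online-softmax attention recurrence (background sketch, not the PR's code)
    \begin{aligned}
    S^{(j)}  &= Q K_j^\top / \sqrt{d} \\
    m^{(j)}  &= \max\big(m^{(j-1)},\ \operatorname{rowmax}(S^{(j)})\big) \\
    \ell^{(j)} &= e^{\,m^{(j-1)} - m^{(j)}}\,\ell^{(j-1)}
                + \operatorname{rowsum}\big(e^{\,S^{(j)} - m^{(j)}}\big) \\
    O^{(j)}  &= e^{\,m^{(j-1)} - m^{(j)}}\,O^{(j-1)}
                + e^{\,S^{(j)} - m^{(j)}}\,V_j \\
    \text{output} &= O^{(T)} / \ell^{(T)}
    \end{aligned}

Because the running maximum m and normalizer l are rescaled as each block arrives, the result equals softmax(QK^T / sqrt(d)) V exactly, while only one score block S^(j) is ever held in fast memory at a time.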