llama.cpp
Commit 8ad92dc1
Committed 1 year ago
ggml : switch to padded F16 mask for ggml_soft_max, ggml_flash_attn_ext
References
#5021 - ggml : add Flash Attention
Author: ggerganov
Committer: ggerganov
Parents: 2ddc9bbe
Files (7)
ggml-cuda.cu
ggml-metal.m
ggml-metal.metal
ggml.c
ggml.h
llama.cpp
tests/test-backend-ops.cpp
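
As a rough illustration of what a "padded F16 mask" means when building a ggml graph, the sketch below allocates a KQ attention mask in F16 whose token dimension is rounded up to a padding multiple using ggml's GGML_PAD macro, instead of being sized exactly to the batch. This is a minimal sketch only: the padding constant KQ_MASK_PAD and the helper function are assumptions for illustration, not code taken from this commit.

```c
// Illustrative sketch of a padded F16 attention mask for ggml.
// KQ_MASK_PAD and build_padded_kq_mask are hypothetical names,
// not identifiers from this commit.
#include "ggml.h"

// Assumed padding granularity for the mask's token dimension.
#define KQ_MASK_PAD 32

static struct ggml_tensor * build_padded_kq_mask(
        struct ggml_context * ctx, int64_t n_kv, int64_t n_tokens) {
    // GGML_PAD rounds n_tokens up to a multiple of KQ_MASK_PAD,
    // so the number of mask rows is padded rather than exact.
    const int64_t n_tokens_pad = GGML_PAD(n_tokens, KQ_MASK_PAD);

    // The mask is stored as F16 rather than F32.
    return ggml_new_tensor_2d(ctx, GGML_TYPE_F16, n_kv, n_tokens_pad);
}
```

The padded rows beyond the real n_tokens would simply be filled with neutral mask values, so kernels such as the softmax and flash-attention paths can assume a fixed alignment on that dimension.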