ggerganov/whisper.cpp
Commits on branch gg/objc
b67bdc94  disable (ggerganov, committed 1 year ago, Verified)
5e966f78  try3 (ggerganov, committed 1 year ago, Verified)
54005478  try2 (ggerganov, committed 1 year ago, Verified)
49c389b4  examples : try to fix objc CI (ggerganov, committed 1 year ago, Verified)
37c88027  whisper : use backend registry (#0) (ggerganov, committed 1 year ago)
9db070a3  ggml/sched : do not skip views in pre-assignments (slaren, committed 1 year ago)
7fd8d9c2  whisper : adapt to new ggml (wip) (ggerganov, committed 1 year ago)
06e059b8  talk-llama : sync llama.cpp (ggerganov, committed 1 year ago)
c9f49d5f  sync : ggml (ggerganov, committed 1 year ago)
f4c1d7df  ggml : sync resolve (skip) (#0) (ggerganov, committed 1 year ago)
339b8e55  Add required ggml-base and backend libs to cmake pkg (llama/10407) (bandoti, committed 1 year ago)
5f6d6919  cuda : fix CUDA_FLAGS not being applied (llama/10403) (slaren, committed 1 year ago)
8ee76773  sycl : Add option to set the SYCL architecture for all targets (llama/10266) (Rbiessy, committed 1 year ago)
45f1f914  vulkan: Optimize soft_max (llama/10301) (jeffbolznv, committed 1 year ago)
53589c8f  sycl: Revert MUL_MAT_OP support changes (llama/10385) (Alberto Cabrera Pérez, committed 1 year ago)
7ac2f17f  cuda : only use native when supported by cmake (llama/10389) (slaren, committed 1 year ago)
48862c7b  vulkan: remove use of null initializer (llama/10372) (jeffbolznv, committed 1 year ago)
44f7d9f4  metal : fox offset integer overflows in im2col (ggml/1015) (pminev, committed 1 year ago)
fd123025  Vulkan: Fix device info output format specifiers (llama/10366) (0cc4m, committed 1 year ago)
f80bef46  metal : add `GGML_UNARY_OP_ELU` kernel (ggml/1018) (PABannier, committed 1 year ago)
161b4435  CUDA: fix MMV kernel being used for FP16 src1 (llama/10357) (JohannesGaessler, committed 1 year ago)
ef7fbe1c  CMake: fix typo in comment [no ci] (llama/10360) (JohannesGaessler, committed 1 year ago)
0879d359  llama : only use default buffer types for the KV cache (llama/10358) (slaren, committed 1 year ago)
2a444dc5  metal : refactor kernel args into structs (llama/10238) (ggerganov, committed 1 year ago)
45cf1634  ggml : fix undefined reference to 'getcpu' (llama/10354) (FirstTimeEZ, committed 1 year ago)
dcb2922d  CUDA: remove DMMV, consolidate F16 mult mat vec (llama/10318) (JohannesGaessler, committed 1 year ago)
3c5c7511  CMake: default to -arch=native for CUDA build (llama/10320) (JohannesGaessler, committed 1 year ago)
24ad19d0  ggml : fix possible buffer use after free in sched reserve (llama/9930) (slaren, committed 1 year ago)
bd574b05  ggml : inttypes.h -> cinttypes (llama/0) (ggerganov, committed 1 year ago)
7e0eafcb  ggml : adapt AMX to tensor->grad removal (llama/0) (ggerganov, committed 1 year ago)