llama.cpp
vulkan: Fix ErrorOutOfHostMemory on Intel GPU when loading large models with --no-mmap
#20059 · Merged
Commits (10)
- Changed to reuse command buffers to fix crashing on Intel GPU — rillomas committed 27 days ago
- Removed unused parameter — rillomas committed 27 days ago
- Fixed compile error and minor mistake — rillomas committed 26 days ago
- Fix logging — rillomas committed 26 days ago
- Changing to use usage flag per command buffer — rillomas committed 21 days ago
- fixed style — rillomas committed 21 days ago
- added buffer reset — rillomas committed 21 days ago
- Removed cmd_buffer_idx for reuse consistency — rillomas committed 20 days ago
- Merge remote-tracking branch 'origin/master' into fix-async-tensor-crash — rillomas committed 20 days ago
- Fixed style — rillomas committed 20 days ago