llama.cpp
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others)
#15188
Merged
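Per the PR title, a single send() or recv() call can fail with EINVAL on macOS (and some other platforms) when the requested length is very large, so shipping a multi-gigabyte tensor over the RPC backend in one call breaks. The change splits each transfer into bounded chunks and loops until everything has gone through. Below is a minimal sketch of that idea, not the merged patch: the chunk-size value, the plain int socket descriptor, and the simplified error handling are assumptions for illustration; the names send_data and MAX_CHUNK_SIZE are taken from the commit messages further down.

```cpp
#include <sys/types.h>
#include <sys/socket.h>
#include <algorithm>
#include <cstddef>

// Illustrative cap; the actual value used by the patch may differ.
static constexpr size_t MAX_CHUNK_SIZE = 1024ull * 1024 * 1024;

static bool send_data(int sockfd, const void * data, size_t size) {
    size_t bytes_sent = 0;
    while (bytes_sent < size) {
        // Never ask the kernel to send more than MAX_CHUNK_SIZE in one call.
        size_t n = std::min(size - bytes_sent, MAX_CHUNK_SIZE);
        ssize_t ret = send(sockfd, (const char *) data + bytes_sent, n, 0);
        if (ret <= 0) {
            // Error or peer closed; the commits note the real code reports this
            // via GGML_LOG_ERROR.
            return false;
        }
        // Short writes are fine: the loop simply continues from where we stopped.
        bytes_sent += (size_t) ret;
    }
    return true; // a 0-length send falls out naturally: the loop never runs
}
```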
Commits
ggml-rpc: chunk send()/recv() to avoid EINVAL for very large tensors over RPC (macOS & others). Fixes #15055
Shinnosuke Takagi committed 139 days ago

ggml-rpc: rename RPC_IO_CHUNK->MAX_CHUNK_SIZE, use std::min() for cap, switch to GGML_LOG_ERROR, handle 0-length send/recv
Tak-RS committed 137 days ago

rpc: drop n==0 special case in send_data(); retry in loop per review
Tak-RS committed 136 days ago

rpc: remove trailing whitespace in send_data()
Tak-RS committed 135 days ago