llama.cpp
b617cfd2
- ggml-alloc : fix leak when reusing a tensor with a larger size (#16679)
Commit
15 days ago
ggml-alloc : fix leak when reusing a tensor with a larger size (#16679)
References
#16679 - ggml-alloc : fix leak when reusing a tensor with a larger size
Author
slaren
Parents
79068501
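The commit title points at a leak in ggml-alloc that occurs when an existing tensor allocation is reused but the new tensor needs more space. The real fix is in ggml-alloc itself; as a rough, hypothetical C sketch of the general failure mode (the names below are illustrative and do not come from ggml), a cached block can only be reused when it is large enough, and the old block has to be released before a bigger one is allocated, otherwise it leaks:

```c
/* Hypothetical illustration, not ggml's actual code: reuse a cached
 * buffer when it is large enough, otherwise free it and allocate a
 * bigger one. Omitting the free() below is the kind of leak the
 * commit title describes. */
#include <stdlib.h>
#include <string.h>

typedef struct {
    void  *data;
    size_t size;
} reusable_buffer;

/* Ensure `buf` holds at least `needed` bytes, reusing it when possible. */
static void reusable_buffer_reserve(reusable_buffer *buf, size_t needed) {
    if (buf->data != NULL && buf->size >= needed) {
        return;               /* existing allocation is large enough: reuse it */
    }
    free(buf->data);          /* forgetting this free leaks the old block */
    buf->data = malloc(needed);
    buf->size = (buf->data != NULL) ? needed : 0;
}

int main(void) {
    reusable_buffer buf = { NULL, 0 };
    reusable_buffer_reserve(&buf, 128);   /* initial allocation */
    reusable_buffer_reserve(&buf, 256);   /* larger size: old block must be freed */
    if (buf.data != NULL) {
        memset(buf.data, 0, buf.size);
    }
    free(buf.data);
    return 0;
}
```

For the actual change, see PR #16679 referenced above.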