llama.cpp
381ee195
finetune : fix ggml_allocr lifetimes (tmp workaround) (#5033)
Commit
1 year ago
finetune : fix ggml_allocr lifetimes (tmp workaround) (#5033)

* Fix issue with alloc causing max_compute_size to be calculated incorrectly
* remove ggml_allocr_free as suggested in issue #4791
References
#5033 - Fix issue #4791 alloc causes compute_size to be calculated incorrectly in train-text-from-scratch, end result core dump
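For context, the code touched by this commit is the compute-size measurement path in finetune / train-text-from-scratch, where a ggml "measure" allocator is used to estimate how much memory a training graph needs before the real compute buffer is allocated. The sketch below is a rough illustration of that pattern under the ggml-alloc API of this era, not the actual diff; measure_compute_size and its parameters are hypothetical names, while ggml_allocr_new_measure, ggml_allocr_alloc_graph, and ggml_allocr_free are real ggml-alloc calls from that version of the library.

    // Illustrative sketch only; mirrors the measurement pattern, not the commit diff.
    #include <cstddef>

    #include "ggml.h"
    #include "ggml-alloc.h"

    static size_t measure_compute_size(struct ggml_cgraph * gf, size_t tensor_alignment) {
        // A measure allocator performs no real allocations; it only tracks how much
        // memory would be needed to evaluate the graph.
        struct ggml_allocr * alloc = ggml_allocr_new_measure(tensor_alignment);

        // Walk the graph once and record the peak required size.
        size_t max_compute_size = ggml_allocr_alloc_graph(alloc, gf) + tensor_alignment;

        // tmp workaround from this commit: the ggml_allocr_free(alloc) call that used
        // to sit here is removed, since freeing the measure allocator at this point
        // was tied to max_compute_size coming out wrong and a core dump (issue #4791).
        // The allocator is intentionally leaked until the lifetimes are fixed properly.
        // ggml_allocr_free(alloc);

        return max_compute_size;
    }

Leaking the measure allocator is tolerable as a stopgap because it is small and created only during measurement; the commit title explicitly labels this a tmp workaround pending a proper fix of the allocator lifetimes.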
Author
bzuzo
Parents
a5cacb22