llama.cpp
3a6efdd0
- convert : use f32 outtype for bf16 tensors (#6106)
Commit
1 year ago
convert : use f32 outtype for bf16 tensors (#6106)

The old behaviour is to use f16, but bf16 to f16 is not a lossless conversion. Change the outtype to f32 to default to a lossless conversion.
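The reason the conversion is lossy in one direction but not the other: bf16 shares f32's 8-bit exponent (it is simply the top 16 bits of an f32), so widening bf16 to f32 only pads zero bits, while f16 has a 5-bit exponent and cannot hold bf16's full dynamic range. A minimal sketch of this, independent of the converter's actual code and using only bit manipulation via Python's `struct` module:

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate a float to bf16: keep the top 16 bits of its f32 encoding."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Widen bf16 to f32 by padding the low 16 bits with zeros: lossless."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

def fits_in_f16(x: float) -> bool:
    """Check whether a value survives narrowing to IEEE-754 binary16."""
    try:
        struct.pack("<e", x)  # "e" is struct's half-precision format
        return True
    except (OverflowError, struct.error):
        return False

# A bf16 weight of large magnitude (e.g. ~1e30) is exactly representable
# in f32, since both formats use the same 8-bit exponent ...
b = f32_to_bf16_bits(1.0e30)
as_f32 = bf16_bits_to_f32(b)
assert f32_to_bf16_bits(as_f32) == b  # round-trips exactly through f32

# ... but overflows f16, whose largest finite value is 65504.
assert not fits_in_f16(as_f32)
```

This is why defaulting the outtype to f32 preserves bf16 tensors exactly, at the cost of doubling their storage size.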
References
#6106 - convert : use f32 outtype for bf16 tensors
Author
Artefact2
Parents
d01b3c4c