llama.cpp
Introduce bfloat16 support #6412 (Merged)
ggerganov merged 8 commits into ggml-org:master from jart:bf16
jart force pushed 1 year ago
jart force pushed 1 year ago
jart force pushed 1 year ago
jart force pushed to 07cebab5 1 year ago
jart force pushed from 07cebab5 1 year ago
jart force pushed 1 year ago
ggerganov approved these changes on 2024-04-09
jart force pushed 1 year ago
ggerganov approved these changes on 2024-04-25
jart force pushed 1 year ago
jart force pushed to 68614cec 1 year ago
jart force pushed from ed0f47b3 to 82aebcf0 1 year ago
Commits (8):
Introduce bfloat16 support (55e962a2)
Remove GGML code that's not needed (823d45ad)
Minimize the GGML API surface area for BF16 (180bfcd8)
Remove bf16 luts (d6892c48)
Make the GGML header look nicer (ce0442d7)
Fix documentation (bc278c8a)
Apply ggerganov's fixes for test-backend-ops (2741a997)
Add BF16 code for new ggml_validate_row_data() function (632624e9)
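
For context on what a bf16 conversion involves: bfloat16 keeps the sign bit, all 8 exponent bits, and the top 7 mantissa bits of an IEEE-754 fp32 value, so it is literally the upper half of the 32-bit pattern. The sketch below is a minimal illustration of that round trip in plain C; it is not the code from this PR, and the bf16_t, fp32_to_bf16, and bf16_to_fp32 names are hypothetical stand-ins rather than ggml's actual API.

```c
/* Illustrative sketch only: bf16 as the upper 16 bits of an fp32.
 * Not the implementation from this PR; names are hypothetical. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef struct { uint16_t bits; } bf16_t; /* hypothetical bf16 container */

/* fp32 -> bf16 with round-to-nearest-even on the 16 dropped mantissa bits.
 * NaN inputs are not special-cased here (a real converter should keep them NaN). */
static bf16_t fp32_to_bf16(float f) {
    uint32_t u;
    memcpy(&u, &f, sizeof u);            /* bit-exact view of the float */
    u += 0x7FFFu + ((u >> 16) & 1u);     /* rounding bias; ties round to even */
    bf16_t h = { (uint16_t)(u >> 16) };  /* keep sign, exponent, top 7 mantissa bits */
    return h;
}

/* bf16 -> fp32 is exact: place the 16 bits back in the high half. */
static float bf16_to_fp32(bf16_t h) {
    uint32_t u = (uint32_t)h.bits << 16;
    float f;
    memcpy(&f, &u, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159265f;
    bf16_t h = fp32_to_bf16(x);
    printf("%.8f -> 0x%04x -> %.8f\n", x, h.bits, bf16_to_fp32(h));
    return 0;
}
```

Because only low-order mantissa bits are dropped, bf16 keeps fp32's full exponent range, which is the usual reason it is preferred over fp16 for storing model weights.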
jart force pushed from 82aebcf0 to 632624e9 1 year ago
ggerganov merged 38554160 into master 1 year ago
mofosyne added the Tensor Encoding Scheme label
mofosyne added the Review Complexity : High label
Reviewers: ggerganov
Assignees: No one assigned
Labels: Review Complexity : High, Tensor Encoding Scheme
Milestone: No milestone