llama.cpp
CUDA: tighter VRAM scratch size for 65b/70b (#2551, merged)
JohannesGaessler merged 1 commit into ggml-org:master from JohannesGaessler:cuda-tighter-65b-70b-scratch.

Commits:
- 5d8b7659 CUDA: tighter VRAM scratch size for 65b/70b

ggerganov approved these changes on 2023-08-08.
JohannesGaessler merged acfc5478 into master 2 years ago.
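The page capture does not include the diff itself, but the title suggests the change shrinks the per-model VRAM scratch buffer reserved for the 65B/70B model sizes. As a hedged illustration only, a scratch allocation of this kind is often a lookup from model size to a reserved byte count; the function name and every byte value below are invented for this sketch and are not the values from the PR:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: pick a VRAM scratch buffer size from the model's
// parameter count in billions. All sizes here are made up for illustration;
// the PR's actual change and constants are not shown in this page capture.
static size_t vram_scratch_size(int n_params_b) {
    const size_t MiB = 1024u * 1024u;
    switch (n_params_b) {
        case 7:  return  512 * MiB;  // hypothetical
        case 13: return  640 * MiB;  // hypothetical
        case 65:
        case 70: return 1024 * MiB;  // the "tightened" bound, hypothetical
        default: return 1536 * MiB;  // conservative fallback, hypothetical
    }
}
```

A tighter bound here matters because the scratch buffer is reserved up front: every megabyte not set aside for scratch is a megabyte available for offloading more model layers to the GPU.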