text-generation-webui
Add llama.cpp GPU offload option
#2060 (Merged)
Commits: 5
Files changed: 4
- README.md
- docs/llama.cpp-models.md
- modules/llamacpp_model.py
- modules/shared.py
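Given that the PR touches `modules/shared.py` (where this project keeps its command-line flags) and adds a llama.cpp GPU offload option, the change plausibly registers a layer-offload flag there. The sketch below is an assumption, not the PR's actual diff: the flag name `--n-gpu-layers` and its default of 0 (fully on CPU) mirror llama.cpp's own option, but the real names may differ.

```python
import argparse

# Hedged sketch of how a GPU offload flag might be added in modules/shared.py.
# The flag name and default are assumptions based on llama.cpp's conventions.
parser = argparse.ArgumentParser()
parser.add_argument(
    '--n-gpu-layers', type=int, default=0,
    help='Number of model layers to offload to the GPU (llama.cpp models only).'
)

# Example: a user requesting 32 offloaded layers.
args = parser.parse_args(['--n-gpu-layers', '32'])
print(args.n_gpu_layers)
```

In `modules/llamacpp_model.py`, the parsed value would presumably be forwarded to the model loader; llama-cpp-python's `Llama` constructor accepts an `n_gpu_layers` parameter for exactly this purpose.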