text-generation-webui
Add llama.cpp GPU offload option
#2060
Merged

Files changed:
  • README.md
  • docs/llama.cpp-models.md
  • modules/llamacpp_model.py
  • modules/shared.py

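The PR body and diff are not shown here, but given the files touched (shared.py for command-line flags, llamacpp_model.py for the loader), the change presumably wires a GPU offload setting through to llama-cpp-python's `n_gpu_layers` parameter. Below is a minimal sketch of that pattern; the `--n-gpu-layers` flag name is an assumption, and only `llama_cpp.Llama` and its `n_gpu_layers` argument are taken from the upstream library.

```python
# Sketch: expose a GPU offload option and pass it to llama-cpp-python.
# The flag name and script structure are assumptions, not the PR's exact diff;
# llama_cpp.Llama(n_gpu_layers=...) is the real upstream parameter.
import argparse

from llama_cpp import Llama

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str, required=True,
                    help="Path to the llama.cpp model file")
parser.add_argument("--n-gpu-layers", type=int, default=0,
                    help="Number of layers to offload to the GPU (0 = CPU only)")
args = parser.parse_args()

# Layers beyond n_gpu_layers stay on the CPU, so partial offload is possible
# when the model does not fit entirely in VRAM.
model = Llama(model_path=args.model, n_gpu_layers=args.n_gpu_layers)

output = model("Hello, ", max_tokens=16)
print(output["choices"][0]["text"])
```

In text-generation-webui itself, the flag would be registered in modules/shared.py alongside the other llama.cpp options and read inside modules/llamacpp_model.py when the model is constructed, which matches the two module files listed above.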