llama.cpp
71564139 - Support multiple GPUs (split mode) on SYCL backend (#5806)

Commit
1 year ago
Support multiple GPUs (split mode) on SYCL backend (#5806)
* support multiple cards: split-mode - layer|row
* remove warning
* rebase with master, support two new OPs, close feature for -sm=row, fix for unit test
* update news
* fix merge error
* update according to review comments
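As a usage sketch of the split-mode feature this commit adds: llama.cpp exposes it via the `-sm` / `--split-mode` flag. The binary name, model path, and layer count below are placeholder assumptions, not taken from the commit; per the commit message, `-sm row` is disabled for the SYCL backend here, so `layer` is the multi-GPU mode to use.

```shell
# Hypothetical invocation, assuming a SYCL build of llama.cpp and a local
# GGUF model (paths are placeholders).
# -sm layer : distribute whole layers across the available GPUs
#             (row-wise splitting is closed for SYCL in this commit)
# -ngl 33   : offload 33 layers to the GPUs (model-dependent)
./build/bin/main -m models/llama-2-7b.Q4_0.gguf -ngl 33 -sm layer -p "Hello"
```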
Files changed
  • README-sycl.md
  • common/common.cpp
  • examples/llama-bench/llama-bench.cpp
  • examples/sycl/ls-sycl-device.cpp
  • examples/sycl/run-llama2.sh
  • ggml-sycl.cpp
  • ggml-sycl.h
  • llama.cpp