llama.cpp
2ddc9bbe - Merge branch 'master' into gg/flash-attn

Committed 1 year ago

Changed files:
  • .github/workflows
    • build.yml
    • editorconfig.yml
  • .gitignore
  • CMakeLists.txt
  • README-sycl.md
  • README.md
  • common
    • common.cpp
    • common.h
    • train.cpp
  • examples
    • batched-bench
      • batched-bench.cpp
    • llama-bench
      • llama-bench.cpp
    • llava
      • MobileVLM-README.md
    • server
      • server.cpp
    • sycl
      • ls-sycl-device.cpp
      • win-build-sycl.bat
      • win-run-llama2.bat
  • ggml-cuda.cu
  • ggml-metal.m
  • ggml-metal.metal
  • ggml-sycl.cpp
  • ggml-sycl.h
  • ggml-vulkan-shaders.hpp
  • ggml-vulkan.cpp
  • ggml.c
  • ggml.h
  • ggml_vk_generate_shaders.py
  • llama.cpp
  • llama.h
  • scripts
    • install-oneapi.bat
  • tests
    • test-backend-ops.cpp