text-generation-inference
52e48739 - Remove vLLM dependency for CUDA (#2751)

Remove vLLM dependency for CUDA (#2751)

* Remove vLLM dependency for CUDA

This change adds `attention-kernels` as a dependency for paged attention
and cache reshaping. With that, we don't use vLLM anywhere for CUDA.

Tested run (since we don't have paged attention in CI):

```
❯ ATTENTION=paged python -m pytest integration-tests -k "llama and awq" --release
[...]
5 snapshots passed.
```

* Fix clippy warning
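For context, a minimal sketch of the kind of backend swap this commit implies on the Python side, assuming the new package imports as `attention_kernels` and mirrors the vLLM op names it replaces; the module, function names, and wrappers below are illustrative assumptions, not taken from the diff:

```python
# Illustrative sketch only: `attention_kernels`, `paged_attention_v1`, and
# `reshape_and_cache` are assumed names; the real integration lives in the
# TGI server's attention layer.
try:
    # CUDA paged attention and KV-cache reshaping now come from the
    # `attention-kernels` package rather than from vLLM.
    import attention_kernels  # assumed import name for the new dependency
    HAS_ATTENTION_KERNELS = True
except ImportError:
    HAS_ATTENTION_KERNELS = False


def paged_attention(*args, **kwargs):
    """Dispatch paged attention to the attention-kernels backend on CUDA."""
    if not HAS_ATTENTION_KERNELS:
        raise ImportError(
            "Paged attention on CUDA requires the `attention-kernels` package."
        )
    # Assumed entry point; it stands in for the vLLM op it replaces.
    return attention_kernels.paged_attention_v1(*args, **kwargs)


def reshape_and_cache(*args, **kwargs):
    """Write new key/value entries into the paged KV cache."""
    if not HAS_ATTENTION_KERNELS:
        raise ImportError(
            "KV-cache reshaping on CUDA requires the `attention-kernels` package."
        )
    # Assumed entry point; arguments are forwarded unchanged.
    return attention_kernels.reshape_and_cache(*args, **kwargs)
```

The wrappers forward arguments unchanged so callers in the model code don't have to care which package provides the CUDA kernels.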