huggingface/text-generation-inference
Commits on branch feat/improve_max_tokens
a9634953  add logic to queue (OlivierDehaene, committed 2 years ago)
4f460e5b  feat(server): improve max tokens calculation (OlivierDehaene, committed 2 years ago)
7de8a377  fix(benchmarking): fix benchmarking tool (OlivierDehaene, committed 2 years ago)
45344244  Starting some routing tests. (#233) (Narsil, committed 2 years ago, Verified)
323546df  fix(python-client): add auth headers to is supported requests (#234) (OlivierDehaene, committed 2 years ago, Verified)
37b64a5c  chore(server): update safetensors version (#235) (OlivierDehaene, committed 2 years ago, Verified)
8b182eb9  feat(router): add endpoint info to /info route (#228) (OlivierDehaene, committed 2 years ago, Verified)
ebc74d56  feat(router): use number of tokens in batch as input for dynamic batching (#226) (OlivierDehaene, committed 2 years ago, Verified)
98a3e0d1  chore(server): update huggingface-hub (#227) (OlivierDehaene, committed 2 years ago, Verified)
4a7dd408  feat(server): reduce memory requirement (#214) (njhill, committed 2 years ago, Verified)
6ded76a4  v0.6.0 (#222) (OlivierDehaene, committed 2 years ago, Verified)
97df0c7b  misc: update to rust 1.69 (#221) (OlivierDehaene, committed 2 years ago, Verified)
4b460e72  fix(server): fix flash batch filtering (#220) (OlivierDehaene, committed 2 years ago, Verified)
1ffea36e  fix(server): fix flash causal (#219) (OlivierDehaene, committed 2 years ago, Verified)
86bca365  fix(server): fix flash causal (#218) (OlivierDehaene, committed 2 years ago, Verified)
afc5b999  fix(server): cleanup new flash past_key_values logic (#217) (OlivierDehaene, committed 2 years ago, Verified)
db4cb5e4  fix(server): fix past key values logic (#216) (OlivierDehaene, committed 2 years ago, Verified)
343437c7  feat(router): add device and dtype info (#215) (OlivierDehaene, committed 2 years ago, Verified)
ac8c0f6f  feat(server): flash attention past key value optimizations (#213) (njhill, committed 2 years ago, Verified)
274513e6  fix(ci): fix sha in docker image (#212) (OlivierDehaene, committed 2 years ago, Verified)
709d8936  feat(router): drop requests when client closes the channel (#202) (OlivierDehaene, committed 2 years ago, Verified)
b6ee0ec7  feat(router): add git sha to info route (#208) (OlivierDehaene, committed 2 years ago, Verified)
252f42c1  fix(router): add auth token to get model info (#207) (OlivierDehaene, committed 2 years ago, Verified)
6837b2eb  fix(docker): remove unused dependencies (#205) (OlivierDehaene, committed 2 years ago, Verified)
5d27f525  fix(server): fix hf_transfer issue with private repos (#203) (OlivierDehaene, committed 2 years ago, Verified)
a88c54bb  feat(server): check cuda capability when importing flash models (#201) (OlivierDehaene, committed 2 years ago, Verified)
e14ae3b5  feat(server): support quantization for flash models (#200) (OlivierDehaene, committed 2 years ago, Verified)
2475aede  feat(router): add info route (#196) (OlivierDehaene, committed 2 years ago, Verified)
b927244e  feat(python-client): get list of currently deployed tgi models using the inference API (#191) (OlivierDehaene, committed 2 years ago, Verified)
c13b9d87  fix(router): fix truncation (#190) (OlivierDehaene, committed 2 years ago, Verified)
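Several of the commits above concern how the router sizes batches, notably ebc74d56 (#226, "use number of tokens in batch as input for dynamic batching") and the branch-only commits a9634953 and 4f460e5b, which adjust the queue's max-tokens accounting. As a rough illustration of the token-budget idea only: the sketch below is hypothetical, with invented names and types, and is not TGI's actual router code.

```rust
/// Hypothetical queued request: tokens already in the prompt plus the
/// maximum number of tokens it may still generate. (Illustrative only,
/// not TGI's actual data structures.)
struct QueuedRequest {
    input_tokens: u32,
    max_new_tokens: u32,
}

/// Pop requests off the front of the queue into a batch while the
/// worst-case token footprint (prompt + potential generation) stays
/// within `token_budget`, i.e. budget by tokens rather than by
/// request count, in the spirit of #226.
fn next_batch(queue: &mut Vec<QueuedRequest>, token_budget: u32) -> Vec<QueuedRequest> {
    let mut batch = Vec::new();
    let mut tokens_used: u32 = 0;
    while !queue.is_empty() {
        let footprint = queue[0].input_tokens + queue[0].max_new_tokens;
        if tokens_used + footprint > token_budget {
            // The next request would overflow the budget; leave it queued.
            break;
        }
        tokens_used += footprint;
        batch.push(queue.remove(0));
    }
    batch
}

fn main() {
    let mut queue = vec![
        QueuedRequest { input_tokens: 100, max_new_tokens: 200 },
        QueuedRequest { input_tokens: 50, max_new_tokens: 50 },
        QueuedRequest { input_tokens: 400, max_new_tokens: 400 },
    ];
    let batch = next_batch(&mut queue, 512);
    // Prints: batched 2 requests, 1 still queued
    println!("batched {} requests, {} still queued", batch.len(), queue.len());
}
```

The point of budgeting by tokens rather than by request count is that it bounds the batch's worst-case memory footprint even when prompt lengths and max_new_tokens vary widely across requests, which is also the direction of 4a7dd408 (#214, "reduce memory requirement").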