vllm
f38ee34a
[feat] Enable mm caching for transformers backend (#21358)
Commit
142 days ago
[feat] Enable mm caching for transformers backend (#21358)
Signed-off-by: raushan <raushan@huggingface.co>
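For context, a minimal usage sketch of what this commit touches: running a multimodal model through vLLM's Transformers modeling backend (selected via `model_impl="transformers"`), where multimodal preprocessor outputs can now be cached and reused across requests that repeat the same inputs. The model name, image path, and prompt format below are illustrative assumptions, not taken from the commit.

```python
# Hedged sketch (not from the commit itself): multimodal inference
# through vLLM's Transformers backend. The mm preprocessing cache
# applies when identical images/prompts recur across requests.
from PIL import Image

from vllm import LLM, SamplingParams

# model_impl="transformers" forces the Transformers modeling backend
# instead of a native vLLM model implementation.
llm = LLM(model="llava-hf/llava-1.5-7b-hf", model_impl="transformers")

# "example.jpg" is a placeholder input image.
image = Image.open("example.jpg")

outputs = llm.generate(
    {
        "prompt": "USER: <image>\nDescribe the image. ASSISTANT:",
        "multi_modal_data": {"image": image},
    },
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```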
References
#21358 - [feat] Enable mm caching for transformers backend
Author
zucchini-nlp
Parents
b194557a