chat-ui
62ae8a90
- fix: set max_tokens for Qwen3.5-397B-A17B to prevent truncation
Commit · 3 days ago
fix: set max_tokens for Qwen3.5-397B-A17B to prevent truncation

Together AI defaults to 2048 max_tokens when unset, causing responses to be cut off mid-sentence. Set to 32768 per Qwen's official recommendation for general use.
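The change described likely amounts to pinning `max_tokens` in the model's configuration so the Together AI default of 2048 never applies. A minimal sketch of what such an entry could look like, assuming a chat-ui-style `MODELS` JSON config with an OpenAI-compatible endpoint (the field names and base URL here are illustrative assumptions, not taken from the commit diff):

```json
{
  "name": "Qwen3.5-397B-A17B",
  "endpoints": [
    {
      "type": "openai",
      "baseURL": "https://api.together.xyz/v1"
    }
  ],
  "parameters": {
    "max_tokens": 32768
  }
}
```

With `max_tokens` set explicitly in `parameters`, every request carries the 32768 limit instead of falling back to the provider-side default.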
References
#2133 - fix: set max_tokens for Qwen3.5-397B-A17B to prevent truncation
Author: gary149
Committer: gary149
Parents: 4a0fe600