30e2260e - fix(core): Decouple provider prefix from model name in init_chat_model logic (#34046)

fix(core): Decouple provider prefix from model name in init_chat_model logic (#34046)

Addresses Issue #34007. Fixes a bug where aliases like `mistral:` were inferred correctly as a provider but the prefix was not stripped from the model name, causing API 400 errors. Added logic to strip the prefix when inference succeeds.

**Description**

This PR resolves a logic error in `init_chat_model` where inferred provider aliases (specifically `mistral:`) were correctly identified but not stripped from the model string.

**The Problem**

When passing a string like `mistral:ministral-8b-latest`, the factory logic correctly inferred the provider as `mistralai` but failed to enter the string-splitting block because the alias `mistral` was not in the hardcoded `_SUPPORTED_PROVIDERS` list. This caused the raw string `mistral:ministral-8b-latest` to be passed to the `ChatMistralAI` constructor, resulting in a 400 API error.

**The Fix**

I updated `_parse_model` in `libs/langchain/langchain/chat_models/base.py`. The logic now attempts to infer the provider from the prefix *before* determining whether to split the string. This ensures that valid aliases trigger the stripping logic, passing only the clean `model_name` to the integration class.

**Issue**

Fixes #34007

**Dependencies**

None.

**Verification**

Validated locally with a reproduction script:

- Input: `mistral:ministral-8b-latest`
- Result: Successfully instantiates `ChatMistralAI` with `model="ministral-8b-latest"`.
- Validated that standard inputs (e.g., `gpt-4o`) remain unaffected.

Co-authored-by: ioop <ioop@Sidharths-MacBook-Air.local>