langchain
a2cc9b55 - core[patch]: Remove autoupgrade to addable dict in Runnable/RunnableLambda/RunnablePassthrough transform (#20677)

Causes an issue for this code:

```python
from langchain.chat_models.openai import ChatOpenAI
from langchain.output_parsers.openai_tools import JsonOutputToolsParser
from langchain.schema import SystemMessage

prompt = SystemMessage(content="You are a nice assistant.") + "{question}"

llm = ChatOpenAI(
    model_kwargs={
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "web_search",
                    "description": "Searches the web for the answer to the question.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "query": {
                                "type": "string",
                                "description": "The question to search for.",
                            },
                        },
                    },
                },
            }
        ],
    },
    streaming=True,
)

parser = JsonOutputToolsParser(first_tool_only=True)

llm_chain = prompt | llm | parser | (lambda x: x)

for chunk in llm_chain.stream({"question": "tell me more about turtles"}):
    print(chunk)

# message = llm_chain.invoke({"question": "tell me more about turtles"})
# print(message)
```

Instead, by definition, we'll assume that RunnableLambdas consume the entire stream, and that if the stream isn't addable then it's the last message of the stream that's in the usable format.

---

If users want to use addable dicts, they can wrap the dict in an AddableDict class.

---

Likely, we need to follow up with the same change for other places in the code that do the upgrade.
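A rough sketch of the AddableDict workaround mentioned above (not part of the commit; the import paths assume current `langchain_core`, and `tag_chunk` is a hypothetical helper): a lambda whose streamed output should remain accumulable can wrap each chunk in `AddableDict` explicitly instead of relying on the removed autoupgrade.

```python
from langchain_core.runnables import RunnableLambda
from langchain_core.runnables.utils import AddableDict


def tag_chunk(x: dict) -> AddableDict:
    # Wrapping the plain dict in AddableDict opts back into the
    # "addable" accumulation behavior for downstream consumers of
    # the stream.
    return AddableDict(x)


step = RunnableLambda(tag_chunk)
print(step.invoke({"text": "hi"}))  # {'text': 'hi'}

# AddableDict instances combine key-wise when summed, which is how a
# consumer accumulates streamed chunks:
combined = AddableDict({"text": "Hello, "}) + AddableDict({"text": "world"})
print(combined)  # {'text': 'Hello, world'}
```

In current `langchain_core`, `AddableDict.__add__` merges values per key with `+` where possible, so chunks sharing a key concatenate their values as the stream is accumulated.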