```python
yield chunk
output: Optional[Output] = chunk
try:
    for chunk in stream:
        yield chunk
        try:
            output = output + chunk  # type: ignore
        except TypeError:
            output = None
except BaseException as e:
    run_manager.on_chain_error(e)
    raise e

run_manager.on_chain_end(output)

async def astream(
```
note: this still needs updating
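For context, the accumulation logic in the hunk above can be sketched standalone (a minimal sketch using plain strings as chunks; `collect_stream` is a hypothetical helper, not part of the PR). Once a `+` fails with `TypeError`, `output` stays `None` for the rest of the stream, so `on_chain_end` receives no combined output:

```python
def collect_stream(stream):
    """Accumulate chunks with `+`, mirroring the hunk above.

    The yield-through to the caller is omitted; this only shows how the
    final combined output is built up.
    """
    iterator = iter(stream)
    try:
        output = next(iterator)  # first chunk seeds the accumulator
    except StopIteration:
        return None
    for chunk in iterator:
        try:
            output = output + chunk
        except TypeError:
            # incompatible chunk types: give up on a combined output;
            # None + chunk keeps raising TypeError, so it stays None
            output = None
    return output

print(collect_stream(["Hello", ", ", "world"]))  # -> Hello, world
```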
```python
class ModifierMessage(BaseMessage):
    """Message responsible for modifying other messages (deleting / updating.)"""

    def __init__(self, id: str, **kwargs: Any) -> None:
```
should we raise an error if any other kwargs are specified?

i considered this -- there is an issue w/ the content field during serialization/deserialization. since it's required on the base message, we still need to pass it here at deserialization. i can raise for any other keys though!
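One way to implement what this thread discusses -- accept `content` (needed at deserialization) but raise on anything else -- could look like the sketch below. The plain class stands in for `BaseMessage`, and the attribute handling is simplified; none of this is the PR's actual code:

```python
from typing import Any


class ModifierMessage:
    """Sketch: allow only `content` as an extra kwarg, raise on the rest."""

    def __init__(self, id: str, **kwargs: Any) -> None:
        allowed = {"content"}  # required by the base class at deserialization
        extra = set(kwargs) - allowed
        if extra:
            raise ValueError(f"unexpected kwargs: {sorted(extra)}")
        self.id = id
        self.content = kwargs.get("content", "")


ModifierMessage(id="msg-1")                # ok
ModifierMessage(id="msg-1", content="")    # ok: deserialization path
try:
    ModifierMessage(id="msg-1", extra_field="x")
except ValueError as e:
    print(e)  # -> unexpected kwargs: ['extra_field']
```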
```python
    """Message responsible for modifying other messages (deleting / updating.)"""

    def __init__(self, id: str, **kwargs: Any) -> None:
        return super().__init__("modifier", id=id)
```
is "modifier" the content here? should we just leave it blank?
yea, can just be an empty string "" -- wasn't sure which is less confusing
If we don't need the content, do we want to change the hierarchy so that content is only for messages that contain content? Or could the content be helpful here?
dunno if this is a good idea but: we could add the message removal logic to BaseChatModel, i.e. check for RemoveMessages in the input and perform the specified removals before passing the messages on. that way the user doesn't have to write this logic everywhere
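The pre-processing step proposed in the last comment could be sketched like this. The dataclasses and the `apply_removals` helper are hypothetical stand-ins for illustration, not the PR's API or any BaseChatModel integration:

```python
from dataclasses import dataclass


@dataclass
class Message:
    id: str
    content: str


@dataclass
class RemoveMessage:
    id: str  # id of the message to delete


def apply_removals(messages):
    """Drop every message targeted by a RemoveMessage, plus the markers.

    This is the kind of filtering a chat model could run on its input
    before passing the remaining messages on.
    """
    to_remove = {m.id for m in messages if isinstance(m, RemoveMessage)}
    return [
        m
        for m in messages
        if not isinstance(m, RemoveMessage) and m.id not in to_remove
    ]


history = [Message("1", "hi"), Message("2", "secret"), RemoveMessage("2")]
print(apply_removals(history))  # only message "1" survives
```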
```python
generation += chunk

if chunk.message.id != last_chunk_id:
    seen_chunk_ids.append(chunk.message.id)
```
let's just use a set?
set makes sense conceptually, but i think we might want to maintain the order?
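If both first-seen order and fast membership checks matter, dict keys give an ordered dedup (a sketch of the general idiom, not the PR's code; Python dicts preserve insertion order since 3.7):

```python
def dedupe_preserving_order(ids):
    """Drop duplicate ids while keeping first-seen order.

    A plain set would lose ordering; dict keys keep insertion order and
    still give O(1) duplicate detection.
    """
    return list(dict.fromkeys(ids))


print(dedupe_preserving_order(["a", "b", "a", "c", "b"]))  # -> ['a', 'b', 'c']
```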
This change adds a new message type `RemoveMessage` (and `RemoveMessageChunk`) to allow the following behaviors:

(1) in `langchain_core` -- when using a model w/ fallbacks, if one of the models fails mid-stream, we would want to roll back chunks already streamed

(2) in `langgraph` -- to allow the user or a graph node to manually modify the state

Example usage:

(1) `langchain_core` -- `RunnableWithFallbacks`

Output:

(2) `langgraph`
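To illustrate behavior (1), here is a standalone sketch of rolling back already-streamed chunks when a fallback kicks in. All names here (`Chunk`, `RemoveChunk`, `stream_with_fallback`, the generator helpers) are hypothetical stand-ins, not the PR's API:

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    message_id: str
    text: str


@dataclass
class RemoveChunk:
    """Stand-in for RemoveMessageChunk: tells the consumer to discard
    everything already streamed under `message_id`."""

    message_id: str


def stream_with_fallback(primary, fallback):
    """Stream from `primary`; on a mid-stream failure, emit a RemoveChunk
    to roll back what was already streamed, then stream `fallback`."""
    streamed_id = None
    try:
        for chunk in primary():
            streamed_id = chunk.message_id
            yield chunk
    except Exception:
        if streamed_id is not None:
            yield RemoveChunk(streamed_id)
        yield from fallback()


def flaky():
    yield Chunk("m1", "par")  # partial output before the failure
    raise RuntimeError("model died mid-stream")


def backup():
    yield Chunk("m2", "hello")


events = list(stream_with_fallback(flaky, backup))
for event in events:
    print(event)
```

The consumer sees the partial chunk, then a rollback marker for it, then the fallback's output -- so it can delete the partial text instead of splicing two half-answers together.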