fix(langgraph): skip reasoning/activity messages on AG-UI → LangGraph round-trip #1647
Open
tylerslaton wants to merge 1 commit into
`aguiMessagesToLangChain` (in `integrations/langgraph/typescript/src/utils.ts`)
previously threw "message role is not supported." for any role outside
`user/assistant/system/tool`. That's fine when the AG-UI message
history only contains chat-shaped roles,
but the converter is invoked on every multi-turn call — and the AG-UI
history the frontend sends back includes ALL the messages it has
accumulated, including ones produced by the agent's own AG-UI events.
When an agent emits `REASONING_MESSAGE_*` events (any reasoning model
on the Responses API, Anthropic thinking, Bedrock `reasoning_content`,
etc.), the AG-UI client materialises a `role: "reasoning"` message in
the conversation history. On the next user turn the converter throws
because it has no case for `reasoning`. The runtime catches the throw,
finalises the stream with `RUN_ERROR { code: INCOMPLETE_STREAM,
message: "message role is not supported." }`, and the user sees a red
toast — Turn 1 worked, Turn 2 is dead. `role: "activity"`
(ACTIVITY_MESSAGE_*) hits the same path with the same outcome.
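The failing path can be illustrated with a minimal sketch; the function name and shape here are illustrative, not the actual `utils.ts` code:

```typescript
// Minimal sketch of the pre-fix role handling (illustrative, not the
// actual aguiMessagesToLangChain implementation).
function roleToLangChainType(role: string): "human" | "ai" | "system" | "tool" {
  switch (role) {
    case "user":
      return "human";
    case "assistant":
      return "ai";
    case "system":
      return "system";
    case "tool":
      return "tool";
    default:
      // "reasoning", "activity", and "developer" all land here,
      // killing Turn 2 of any reasoning-model conversation.
      throw new Error("message role is not supported.");
  }
}
```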
Reasoning carries provider-specific encrypted state in `encryptedValue`
(OpenAI Responses API `encrypted_content`, Anthropic extended-thinking
`signature`) that providers use to maintain reasoning continuity across
turns. Simply dropping these messages would make the model forget its
prior chain-of-thought on the next turn.
Fix: forward each AG-UI reasoning message as a standalone AIMessage
whose content carries a `type: "reasoning"` block with the rendered
summary text and the encrypted state. langchain-openai's
`_construct_responses_api_input` already recognises these blocks and
threads them through to the Responses API as reasoning input items, so
the model sees its own prior reasoning state on subsequent turns.
Activity messages are display-only progress events with no LLM-relevant
content and no analogue in LangGraph; skip them rather than throwing.
OpenAI's `developer` role (newer-model replacement for `system`) was
also throwing; map it to a SystemMessage so demo agents that set
developer prompts work out of the box. Genuinely unknown roles still
throw.
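The overall shape of the fix can be sketched as follows. The types, the helper name, and the exact fields of the reasoning block (`summary`, `encrypted_content`) are simplified assumptions for illustration, not the actual converter code:

```typescript
// Sketch of the post-fix conversion (illustrative; field names of the
// reasoning block are assumptions, not the real utils.ts code).
type AguiMessage = {
  role: string;
  content?: string;
  encryptedValue?: string; // provider-specific encrypted reasoning state
};

type LcMessage =
  | { type: "human" | "system"; content: string }
  | { type: "ai"; content: string | Array<Record<string, unknown>> };

function convertMessages(messages: AguiMessage[]): LcMessage[] {
  const out: LcMessage[] = [];
  for (const msg of messages) {
    switch (msg.role) {
      case "user":
        out.push({ type: "human", content: msg.content ?? "" });
        break;
      case "assistant":
        out.push({ type: "ai", content: msg.content ?? "" });
        break;
      case "developer": // OpenAI's newer-model replacement for "system"
      case "system":
        out.push({ type: "system", content: msg.content ?? "" });
        break;
      case "reasoning":
        // Forward as a standalone AI message whose content is a
        // reasoning block carrying the rendered summary text plus the
        // encrypted state, so the provider can resume its prior
        // chain-of-thought on the next turn.
        out.push({
          type: "ai",
          content: [
            {
              type: "reasoning",
              summary: [{ type: "summary_text", text: msg.content ?? "" }],
              ...(msg.encryptedValue
                ? { encrypted_content: msg.encryptedValue }
                : {}),
            },
          ],
        });
        break;
      case "activity":
        // Display-only progress events: nothing LLM-relevant, skip.
        break;
      default:
        // Genuinely unknown roles still throw.
        throw new Error("message role is not supported.");
    }
  }
  return out;
}
```

Note that the `activity` case is silently skipped while unknown roles still hit the default-throw path, matching the behavior described above.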
Verified end-to-end against the CopilotKit `tool-rendering-reasoning-
chain` showcase (multi-turn weather Tokyo → SFO/JFK with
`init_chat_model("openai:gpt-X", use_responses_api=True,
reasoning={...})`):
- Before: Turn 1 ok, Turn 2 hangs with the role-not-supported toast.
- After: both turns complete, both render reasoning blocks + per-tool
cards. LangGraph thread state on Turn 2 shows an `ai: [reasoning]`
message between Turn 1 and Turn 2's user message, sourced from the
AG-UI history — proving the agent receives its own prior reasoning
state.
a10bed5 to d02e3bb
AlemTuzlak approved these changes May 11, 2026
Test plan
- `pnpm test` (full langgraph integration suite) — 142/142 pass
- `message-conversion.test.ts`:
  - should forward reasoning messages as AI messages with reasoning content blocks
  - should forward reasoning without encryptedValue (no signature key)
  - should skip activity messages instead of throwing
  - should convert developer message to system
- `tool-rendering-reasoning-chain` showcase demo (multi-turn: weather Tokyo → SFO/JFK, both turns emit reasoning summaries via the OpenAI Responses API, `gpt-X` with `use_responses_api=True`, `reasoning={effort:"low",summary:"auto"}`). The `ai: [reasoning]` message is present, sourced from the AG-UI history by this fix. The model receives its prior reasoning state on Turn 2.
Why this is a real regression for any reasoning-model multi-turn flow
The bug has been latent because most demos don't surface reasoning, but any integration that uses a reasoning-capable model AND has more than one user turn hits this on Turn 2. The CopilotKit showcase repro takes under 30 seconds: open `/demos/tool-rendering-reasoning-chain`, ask for Tokyo weather, then ask for SFO→JFK flights, and the toast appears.

🤖 Generated with Claude Code