fix(langgraph): skip reasoning/activity messages on AG-UI → LangGraph round-trip#1647

Open
tylerslaton wants to merge 1 commit into main from fix/langgraph-skip-reasoning-activity-messages

Conversation

@tylerslaton
Contributor

@tylerslaton tylerslaton commented May 9, 2026

Problem

aguiMessagesToLangChain in integrations/langgraph/typescript/src/utils.ts throws "message role is not supported." for any role outside user/assistant/system/tool. That's fine when the AG-UI message history only contains chat-shaped roles, but the converter is invoked on every multi-turn call — and the AG-UI history the frontend sends back includes all the messages it has accumulated, including ones produced by the agent's own AG-UI events.

When an agent emits REASONING_MESSAGE_* events (any reasoning model on the Responses API, Anthropic thinking, Bedrock reasoning_content, etc.), the AG-UI client materialises a role: "reasoning" message in the conversation history. On the very next user turn, the converter throws because it has no case for reasoning. The runtime catches the throw, finalises the stream with:

RUN_ERROR { code: "INCOMPLETE_STREAM", message: "message role is not supported." }

…and the user sees a red toast — Turn 1 worked, Turn 2 is dead.

role: "activity" (e.g. ACTIVITY_MESSAGE_*) hits the same path with the same outcome, even without reasoning involved.
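For illustration, here is a minimal sketch of the round-trip history that triggers the throw. The field names beyond `role` are simplified stand-ins for the AG-UI message shape, and `convertAll` is a hypothetical stand-in for the pre-fix converter's role check:

```typescript
// Hypothetical Turn-2 history as sent back by the frontend; the
// `reasoning` entry was materialised from the agent's own
// REASONING_MESSAGE_* events on Turn 1.
const history = [
  { role: "user", content: "What's the weather in Tokyo?" },
  { role: "reasoning", content: "Summarised chain-of-thought…" },
  { role: "assistant", content: "It's sunny in Tokyo." },
  { role: "user", content: "Find me flights SFO → JFK." },
];

// The pre-fix converter only accepts chat-shaped roles, so the
// `reasoning` entry makes the whole conversion throw.
const supported = new Set(["user", "assistant", "system", "tool"]);

function convertAll(msgs: { role: string; content: string }[]): string[] {
  return msgs.map((m) => {
    if (!supported.has(m.role)) {
      throw new Error("message role is not supported.");
    }
    return m.role;
  });
}
```

Calling `convertAll(history)` throws on the second entry, which the runtime then surfaces as the `RUN_ERROR { code: "INCOMPLETE_STREAM" }` above.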

Fix

Reasoning carries provider-specific encrypted state in encryptedValue (OpenAI Responses API encrypted_content, Anthropic extended-thinking signature) that providers use to maintain reasoning continuity across turns. Dropping the message would make the model "forget" its prior chain-of-thought on the next turn.

Forward AG-UI reasoning messages as standalone AIMessages whose content is a type: "reasoning" block carrying the rendered summary text plus the encrypted state. langchain-openai's _construct_responses_api_input already recognises these blocks and threads them through to the Responses API as reasoning input items, so the model sees its own prior reasoning state.

Activity messages are display-only progress events (status pills, streaming progress bars, etc.) emitted via AG-UI events. They have no LLM-relevant content and no analogue in LangGraph's message types; skip rather than throw so multi-turn flows with activity history don't break.

The developer role (OpenAI's newer-model replacement for system) was also throwing; it is now mapped to a SystemMessage so demo agents that set developer prompts work out of the box.

The default-throw path is preserved for genuinely unknown roles.
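The new cases can be sketched roughly as follows. This is a simplified model with local stand-in types, not the actual implementation: the real code in `utils.ts` constructs LangChain `AIMessage`/`SystemMessage` instances, and the block/field names here mirror the description above rather than the exact source:

```typescript
// Simplified stand-ins for the AG-UI and LangChain message shapes.
type AguiMessage = {
  role: string;
  content?: string;
  encryptedValue?: string;
};

type ConvertedMessage =
  | { type: "ai"; content: Array<{ type: string; text: string; encryptedValue?: string }> }
  | { type: "system"; content: string };

// Returns null for messages that should be skipped entirely.
function convertMessage(msg: AguiMessage): ConvertedMessage | null {
  switch (msg.role) {
    case "reasoning":
      // Forward as a standalone AI message with a `reasoning` content
      // block, preserving the provider's encrypted state (when present)
      // so reasoning continuity survives the round-trip.
      return {
        type: "ai",
        content: [
          {
            type: "reasoning",
            text: msg.content ?? "",
            ...(msg.encryptedValue ? { encryptedValue: msg.encryptedValue } : {}),
          },
        ],
      };
    case "activity":
      // Display-only progress events: no LLM-relevant content, so skip.
      return null;
    case "developer":
      // OpenAI's newer-model replacement for the system role.
      return { type: "system", content: msg.content ?? "" };
    default:
      // Genuinely unknown roles still throw, as before.
      throw new Error("message role is not supported.");
  }
}
```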

Test plan

  • pnpm test (full langgraph integration suite) — 142/142 pass
  • 4 new regression tests in message-conversion.test.ts:
    • should forward reasoning messages as AI messages with reasoning content blocks
    • should forward reasoning without encryptedValue (no signature key)
    • should skip activity messages instead of throwing
    • should convert developer message to system
  • Verified end-to-end against the CopilotKit tool-rendering-reasoning-chain showcase demo (multi-turn: weather Tokyo → SFO/JFK, both turns emit reasoning summaries via OpenAI Responses API, gpt-X with use_responses_api=True, reasoning={effort:"low",summary:"auto"}).
  • Reasoning continuity confirmed: inspected the LangGraph thread state after Turn 2 — between Turn 1's tool-result+text and Turn 2's user message, an ai: [reasoning] message is present, sourced from the AG-UI history by this fix. The model receives its prior reasoning state on Turn 2.

Before:

  • Turn 1 ok
  • Turn 2 fails with the role-not-supported toast

After:

  • Both turns complete
  • Both render reasoning blocks + per-tool cards
  • Turn 2 receives Turn 1's reasoning context

Why this is a real regression for any reasoning-model multi-turn flow

The bug has been latent because most demos don't surface reasoning, but any integration that uses a reasoning-capable model AND has more than one user turn hits this on Turn 2. The CopilotKit showcase repro is reproducible in <30 seconds — open /demos/tool-rendering-reasoning-chain, ask Tokyo weather, then ask for SFO→JFK flights, and the toast appears.

🤖 Generated with Claude Code

@tylerslaton tylerslaton requested a review from a team as a code owner May 9, 2026 23:04
@vercel

vercel Bot commented May 9, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: ag-ui-dojo
Deployment: Ready
Actions: Preview, Comment
Updated (UTC): May 10, 2026 9:53pm

@pkg-pr-new

pkg-pr-new Bot commented May 10, 2026

@ag-ui/a2a-middleware

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/a2a-middleware@1647

@ag-ui/a2ui-middleware

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/a2ui-middleware@1647

@ag-ui/event-throttle-middleware

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/event-throttle-middleware@1647

@ag-ui/mcp-apps-middleware

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/mcp-apps-middleware@1647

@ag-ui/middleware-starter

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/middleware-starter@1647

@ag-ui/a2a

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/a2a@1647

@ag-ui/adk

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/adk@1647

@ag-ui/ag2

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/ag2@1647

@ag-ui/agno

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/agno@1647

@ag-ui/aws-strands

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/aws-strands@1647

@ag-ui/claude-agent-sdk

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/claude-agent-sdk@1647

@ag-ui/crewai

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/crewai@1647

@ag-ui/langchain

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/langchain@1647

@ag-ui/langgraph

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/langgraph@1647

@ag-ui/langroid

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/langroid@1647

@ag-ui/llamaindex

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/llamaindex@1647

@ag-ui/mastra

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/mastra@1647

@ag-ui/pydantic-ai

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/pydantic-ai@1647

@ag-ui/server-starter

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/server-starter@1647

@ag-ui/server-starter-all-features

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/server-starter-all-features@1647

@ag-ui/vercel-ai-sdk

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/vercel-ai-sdk@1647

create-ag-ui-app

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/create-ag-ui-app@1647

@ag-ui/client

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/client@1647

@ag-ui/core

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/core@1647

@ag-ui/encoder

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/encoder@1647

@ag-ui/proto

pnpm add https://pkg.pr.new/ag-ui-protocol/ag-ui/@ag-ui/proto@1647

commit: d02e3bb

@tylerslaton tylerslaton force-pushed the fix/langgraph-skip-reasoning-activity-messages branch from a10bed5 to d02e3bb Compare May 10, 2026 21:49