LangGraph Cloud

Getting Started

Connect to a LangGraph Cloud API server for streaming agent workflows.

Requirements

You need a LangGraph Cloud API server. You can start a server locally via LangGraph Studio or use LangSmith for a hosted version.

The state of the graph you are using must have a messages key containing a list of LangChain-like messages.
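For illustration, the expected state shape can be sketched in TypeScript. The type and field names here are illustrative only, not the adapter's actual definitions:

```typescript
// Sketch of the graph state the adapter expects: a `messages` key holding
// LangChain-like message objects. Names are illustrative.
type LangChainLikeMessage = {
  id?: string;
  type: "human" | "ai" | "tool" | "system";
  content: string | Array<Record<string, unknown>>;
};

type GraphState = {
  messages: LangChainLikeMessage[];
};

// Example value:
export const state: GraphState = {
  messages: [
    { id: "1", type: "human", content: "Hello" },
    { id: "2", type: "ai", content: "Hi! How can I help?" },
  ],
};
```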

New project from template

Create a new project based on the LangGraph assistant-ui template:

npx create-assistant-ui@latest -t langgraph my-app

Set environment variables

Create a .env.local file in your project with the following variables:

# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id

Installation in existing React project

Install dependencies

npm install @assistant-ui/react @assistant-ui/react-langgraph @langchain/langgraph-sdk

Set up a proxy backend endpoint (optional, for production)

This example proxy forwards every incoming request to the LangGraph server and injects the API key server-side, so the key is never exposed to the browser. For production use cases, you should limit the API calls to the subset of endpoints you need and perform authorization checks.

@/app/api/[...path]/route.ts
import { NextRequest, NextResponse } from "next/server";

export const runtime = "edge";

function getCorsHeaders() {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "*",
  };
}

async function handleRequest(req: NextRequest, method: string) {
  try {
    const path = req.nextUrl.pathname.replace(/^\/?api\//, "");
    const url = new URL(req.url);
    const searchParams = new URLSearchParams(url.search);
    searchParams.delete("_path");
    searchParams.delete("nxtP_path");
    const queryString = searchParams.toString()
      ? `?${searchParams.toString()}`
      : "";

    const options: RequestInit = {
      method,
      headers: {
        "x-api-key": process.env["LANGCHAIN_API_KEY"] || "",
      },
      signal: req.signal,
    };

    if (["POST", "PUT", "PATCH"].includes(method)) {
      options.body = await req.text();
    }

    const res = await fetch(
      `${process.env["LANGGRAPH_API_URL"]}/${path}${queryString}`,
      options,
    );

    const headers = new Headers(res.headers);
    headers.delete("content-encoding");
    headers.delete("content-length");
    headers.delete("transfer-encoding");
    const corsHeaders = getCorsHeaders();
    for (const [key, value] of Object.entries(corsHeaders)) {
      headers.set(key, value);
    }

    return new NextResponse(res.body, {
      status: res.status,
      statusText: res.statusText,
      headers,
    });
  } catch (e: unknown) {
    if (e instanceof Error) {
      const error = e as Error & { status?: number };
      return NextResponse.json(
        { error: error.message },
        { status: error.status ?? 500 },
      );
    }
    return NextResponse.json({ error: "Unknown error" }, { status: 500 });
  }
}

export const GET = (req: NextRequest) => handleRequest(req, "GET");
export const POST = (req: NextRequest) => handleRequest(req, "POST");
export const PUT = (req: NextRequest) => handleRequest(req, "PUT");
export const PATCH = (req: NextRequest) => handleRequest(req, "PATCH");
export const DELETE = (req: NextRequest) => handleRequest(req, "DELETE");
export const OPTIONS = () =>
  new NextResponse(null, {
    status: 204,
    headers: getCorsHeaders(),
  });

Set up helper functions

@/lib/chatApi.ts
import { Client } from "@langchain/langgraph-sdk";

export const createClient = () => {
  const apiUrl =
    process.env["NEXT_PUBLIC_LANGGRAPH_API_URL"] ||
    (typeof window !== "undefined"
      ? new URL("/api", window.location.href).href
      : "/api");
  return new Client({ apiUrl });
};
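Later examples on this page call createThread, getThreadState, and sendMessage helpers. A minimal sketch of how such helpers might wrap the @langchain/langgraph-sdk client from @/lib/chatApi follows; the helper names and the checkpoint payload field are assumptions to verify against your SDK version. The client is typed structurally here so the sketch stands alone:

```typescript
// Hypothetical helpers assumed by later examples on this page. In the app,
// `client` would be the @langchain/langgraph-sdk Client from createClient().
type LangChainMessage = { id?: string; type: string; content: unknown };

type LangGraphClient = {
  threads: {
    create: () => Promise<{ thread_id: string }>;
    getState: (threadId: string) => Promise<{
      values: { messages: LangChainMessage[] };
      tasks: Array<{ interrupts?: unknown[] }>;
    }>;
  };
  runs: {
    stream: (
      threadId: string,
      assistantId: string,
      payload: Record<string, unknown>,
    ) => AsyncGenerator<{ event: string; data: unknown }>;
  };
};

export const makeChatApi = (client: LangGraphClient, assistantId: string) => ({
  createThread: () => client.threads.create(),
  getThreadState: (threadId: string) => client.threads.getState(threadId),
  sendMessage: ({
    threadId,
    messages,
    checkpointId, // forwarded when editing/regenerating (see Message Editing below)
  }: {
    threadId: string;
    messages: LangChainMessage[];
    checkpointId?: string;
  }) =>
    client.runs.stream(threadId, assistantId, {
      input: { messages },
      streamMode: ["messages", "updates"],
      // Assumed field name — verify against your SDK version:
      ...(checkpointId ? { checkpoint_id: checkpointId } : {}),
    }),
});
```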

Define a MyAssistant component

@/components/MyAssistant.tsx
"use client";

import {  } from "react";
import {  } from "@/components/assistant-ui/thread";
import {  } from "@assistant-ui/react";
import {
  ,
  ,
  type ,
} from "@assistant-ui/react-langgraph";

import {  } from "@/lib/chatApi";

const  = process.env["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!;

export function () {
  const  = (() => (), []);
  const  = (
    () =>
      ({
        ,
        : ,
      }),
    [],
  );

  const  = ({
    : true,
    ,
    : async () => {
      const {  } = await ..();
      return { :  };
    },
    : async () => {
      const  = await ..<{
        : [];
      }>();
      return {
        : ..,
        : .[0]?.,
      };
    },
  });

  return (
    < ={}>
      < />
    </AssistantRuntimeProvider>
  );
}

Use the MyAssistant component

@/app/page.tsx
import {  } from "@/components/MyAssistant";

export default function () {
  return (
    < ="h-dvh">
      < />
    </>
  );
}

Set up environment variables

Create a .env.local file in your project with the following variables:

# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id

Set up UI components

Follow the UI Components guide to set up the UI components.

Advanced APIs

Message Accumulator

The LangGraphMessageAccumulator lets you accumulate message chunks streamed from the server, replicating the graph's messages state client-side.

import {
  LangGraphMessageAccumulator,
  appendLangChainChunk,
} from "@assistant-ui/react-langgraph";

const accumulator = new LangGraphMessageAccumulator({
  appendMessage: appendLangChainChunk,
});

// Add new chunks from the server
if (event.event === "messages/partial") accumulator.addMessages(event.data);

Message Conversion

Use convertLangChainMessages to transform LangChain messages to assistant-ui format:

import { convertLangChainMessages } from "@assistant-ui/react-langgraph";

const threadMessage = convertLangChainMessages(langChainMessage);

Event Handlers

You can listen to streaming events by passing eventHandlers to useLangGraphRuntime:

const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, ...config }) => { /* ... */ },
  eventHandlers: {
    onMessageChunk: (chunk, metadata) => {
      // Fired for each chunk in messages-tuple mode.
      // `metadata` contains langgraph_step, langgraph_node, ls_model_name, etc.
      // For pipe-namespaced events emitted by subgraphs (e.g. `messages|tools:call_abc`),
      // `metadata.namespace` holds the suffix ("tools:call_abc"). Use it to attribute
      // a chunk to a specific subgraph.
    },
    onValues: (values) => {
      // Fired when a top-level `values` event is received.
      // Subgraph `values` events are routed to `onSubgraphValues` instead.
    },
    onUpdates: (updates) => {
      // Fired when a top-level `updates` event is received.
      // Subgraph `updates` events are routed to `onSubgraphUpdates` instead.
    },
    onSubgraphValues: (namespace, values) => {
      // Fired when a subgraph `values|<namespace>` event is received
      // (e.g. `namespace === "tools:call_abc"`). Use this to observe
      // subgraph-internal state without mixing it into `onValues`.
    },
    onSubgraphUpdates: (namespace, updates) => {
      // Fired when a subgraph `updates|<namespace>` event is received.
    },
    onMetadata: (metadata) => { /* thread metadata */ },
    onInfo: (info) => { /* informational messages */ },
    onError: (error) => {
      // Fired for both top-level and subgraph errors.
    },
    onSubgraphError: (namespace, error) => {
      // Additionally fired for subgraph errors with the namespace.
      // Use to attribute a subgraph failure to its source without marking
      // the parent message as incomplete (that only happens for top-level errors).
    },
    onCustomEvent: (type, data) => { /* custom events */ },
  },
});
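The pipe-namespace convention above can be made concrete with a small helper. This is an illustrative sketch, not part of the adapter's public API — the adapter performs this routing internally:

```typescript
// Splits a pipe-namespaced event name like "messages|tools:call_abc" into
// its base channel ("messages") and subgraph namespace ("tools:call_abc").
// Top-level events like "values" have no namespace.
export const parseEventName = (
  event: string,
): { channel: string; namespace?: string } => {
  const [channel, ...rest] = event.split("|");
  return rest.length > 0 ? { channel, namespace: rest.join("|") } : { channel };
};
```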

Message Metadata

When using streamMode: "messages-tuple", each chunk includes metadata from the LangGraph server. Access accumulated metadata per message with the useLangGraphMessageMetadata hook:

import { useLangGraphMessageMetadata } from "@assistant-ui/react-langgraph";

function MyComponent() {
  const metadata = useLangGraphMessageMetadata();
  // Map<string, LangGraphTupleMetadata> keyed by message ID
}

Thread Management

Basic Thread Support

The useLangGraphRuntime hook includes built-in thread management:

const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, ...config }) => {
    // initialize() creates or loads a thread and returns its IDs
    const { remoteId, externalId } = await initialize();
    // Use externalId (your backend's thread ID) for API calls
    return sendMessage({ threadId: externalId, messages, config });
  },
  create: async () => {
    // Called when creating a new thread
    const { thread_id } = await createThread();
    return { externalId: thread_id };
  },
  load: async (externalId) => {
    // Called when loading an existing thread
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
});

Cloud Persistence

For persistent thread history across sessions, integrate with assistant-cloud:

const runtime = useLangGraphRuntime({
  cloud: new AssistantCloud({
    baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL,
    anonymous: true,
  }),
  // ... stream, create, load functions
});

See the Cloud Persistence guide for detailed setup instructions.

Custom Thread List

To surface pre-existing LangGraph thread_ids in the thread picker without running assistant-cloud, pass a RemoteThreadListAdapter via unstable_threadListAdapter. A common implementation backs list() with client.threads.search() and initialize() with client.threads.create().

import type { RemoteThreadListAdapter } from "@assistant-ui/react";
import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: process.env.NEXT_PUBLIC_LANGGRAPH_API_URL });

const threadListAdapter: RemoteThreadListAdapter = {
  async list() {
    const threads = await client.threads.search({ limit: 50 });
    return {
      threads: threads.map((t) => ({
        status: "regular",
        remoteId: t.thread_id,
        externalId: t.thread_id,
        title: (t.metadata as { title?: string } | undefined)?.title,
      })),
    };
  },
  async initialize() {
    const t = await client.threads.create();
    return { remoteId: t.thread_id, externalId: t.thread_id };
  },
  async delete(remoteId) {
    await client.threads.delete(remoteId);
  },
  // rename, archive, unarchive, fetch, generateTitle — see link below
};

const runtime = useLangGraphRuntime({
  stream: async function* (messages, { initialize }) { /* ... */ },
  load: async (externalId) => { /* ... */ },
  unstable_threadListAdapter: threadListAdapter,
});

Setting remoteId === externalId keeps the ids assistant-ui stores aligned with the LangGraph thread ids your load and stream callbacks receive. See the Custom Thread List guide for the full adapter contract.

When unstable_threadListAdapter is provided, the cloud, create, and delete options are ignored — the adapter owns the full thread-list lifecycle.

Message Editing & Regeneration

LangGraph uses server-side checkpoints for state management. To support message editing (branching) and regeneration, you need to provide a getCheckpointId callback that resolves the appropriate checkpoint for server-side forking.

const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, ...config }) => {
    const { externalId } = await initialize();
    if (!externalId) throw new Error("Thread not found");
    return sendMessage({ threadId: externalId, messages, config });
  },
  create: async () => {
    const { thread_id } = await createThread();
    return { externalId: thread_id };
  },
  load: async (externalId) => {
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
  getCheckpointId: async (threadId, parentMessages) => {
    const client = createClient();
    // Get the thread state history and find the checkpoint
    // that matches the parent messages by exact message ID sequence.
    // If IDs are missing, return null and skip edit/reload for safety.
    const history = await client.threads.getHistory(threadId);
    for (const state of history) {
      const stateMessages = state.values.messages;
      if (!stateMessages || stateMessages.length !== parentMessages.length) {
        continue;
      }

      const hasStableIds =
        parentMessages.every((message) => typeof message.id === "string") &&
        stateMessages.every((message) => typeof message.id === "string");
      if (!hasStableIds) {
        continue;
      }

      const isMatch = parentMessages.every(
        (message, index) => message.id === stateMessages[index]?.id,
      );

      if (isMatch) {
        return state.checkpoint.checkpoint_id ?? null;
      }
    }
    return null;
  },
});

When getCheckpointId is provided:

  • Edit buttons appear on user messages, allowing users to edit and resend from that point
  • Regenerate buttons appear on assistant messages, allowing users to regenerate the response

The resolved checkpointId is passed to your stream callback via config.checkpointId. Your sendMessage helper should map it to the LangGraph SDK's checkpoint_id parameter (see the helper function in the setup section above).

Without getCheckpointId, the edit and regenerate buttons will not appear. This is intentional — simply truncating client-side messages without forking from the correct server-side checkpoint would produce incorrect state.
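As a sketch, the mapping inside a sendMessage helper might look like the following. The payload field name (`checkpoint.checkpoint_id` here) is an assumption to verify against your @langchain/langgraph-sdk version:

```typescript
// Builds the run payload, forking from a checkpoint when one was resolved
// by getCheckpointId. Field names are assumptions for illustration.
type RunPayload = {
  input: { messages: unknown[] };
  streamMode: string[];
  checkpoint?: { checkpoint_id: string };
};

export const buildRunPayload = (
  messages: unknown[],
  checkpointId?: string,
): RunPayload => ({
  input: { messages },
  streamMode: ["messages", "updates"],
  ...(checkpointId ? { checkpoint: { checkpoint_id: checkpointId } } : {}),
});
```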

Interrupt Persistence

LangGraph supports interrupting the execution flow to request user input or handle specific interactions. These interrupts can be persisted and restored when switching between threads:

  1. Make sure your thread state type includes the interrupts field
  2. Return the interrupts from the load function along with the messages
  3. The runtime will automatically restore the interrupt state when switching threads

This feature is particularly useful for applications that require user approval flows, multi-step forms, or any other interactive elements that might span multiple thread switches.
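Concretely, step 2 amounts to extracting interrupts alongside messages from the thread state. A pure sketch, with shapes matching the load examples on this page:

```typescript
// Given a LangGraph thread state, produce what the `load` callback should
// return. Shapes are illustrative; interrupts live on the first pending task.
type ThreadState = {
  values: { messages: unknown[] };
  tasks: Array<{ interrupts?: unknown[] }>;
};

export const toLoadResult = (state: ThreadState) => ({
  messages: state.values.messages,
  interrupts: state.tasks[0]?.interrupts,
});
```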

Generative UI (ui_message)

LangGraph's Generative UI lets your graph emit structured UI components alongside assistant messages via push_ui_message (Python) or typedUi().push() (TypeScript). The assistant-ui LangGraph adapter translates these into DataMessageParts on the associated assistant message, which you render with the existing makeAssistantDataUI API.

Enable the custom stream mode

UI messages are emitted through LangGraph's custom stream channel. Make sure your sendMessage helper includes "custom" in streamMode:

streamMode: ["messages", "updates", "custom"]

Alternatively, if your graph accumulates UI messages in state under the ui key (the default for typedUi), "values" also works — the adapter reads both paths.

Custom state key

If your graph uses a non-default stateKey with typedUi(config, { stateKey: "my_ui" }) on the server, pass the matching uiStateKey option to useLangGraphRuntime on the client:

const runtime = useLangGraphRuntime({
  stream: async function* (messages, { initialize }) { /* ... */ },
  uiStateKey: "my_ui",
});

This only affects the values stream path — the custom channel carries each UI event individually and doesn't rely on the state key.

Emit a UI message from your graph

Python
from langgraph.graph.ui import push_ui_message
from langchain_core.messages import AIMessage

async def chart_node(state, config):
    message = AIMessage(id="msg-1", content="Here's your chart.")
    push_ui_message(
        "chart",
        {"series": [1, 2, 3], "title": "Sales"},
        message=message,  # Links the UI to this AI message
    )
    return {"messages": [message]}
TypeScript
import { typedUi } from "@langchain/langgraph-sdk/react-ui/server";
import type { ComponentRegistry } from "./components";

export async function chartNode(state, config) {
  const ui = typedUi<ComponentRegistry>(config);
  const message = { id: "msg-1", type: "ai", content: "Here's your chart." };
  ui.push(
    { name: "chart", props: { series: [1, 2, 3], title: "Sales" } },
    { message },
  );
  return { messages: [message] };
}

Passing message (Python) or { message } (TypeScript) is what links the UI component to a specific assistant message — the adapter reads metadata.message_id to attach the generated DataMessagePart to the correct message in the thread.

Register a renderer on the client

@/components/ChartUI.tsx
import { makeAssistantDataUI } from "@assistant-ui/react";

type ChartProps = {
  series: number[];
  title: string;
};

export const ChartUI = makeAssistantDataUI<ChartProps>({
  name: "chart",
  render: ({ data }) => (
    <div>
      <h3>{data.title}</h3>
      <Chart series={data.series} />
    </div>
  ),
});

Mount the component once somewhere inside the AssistantRuntimeProvider tree. It renders nothing itself — it only registers the renderer:

@/components/MyAssistant.tsx
<AssistantRuntimeProvider runtime={runtime}>
  <ChartUI />
  <Thread />
</AssistantRuntimeProvider>

When a matching UI message arrives, the adapter appends a { type: "data", name: "chart", data: { series, title } } part to the parent assistant message and the registered component renders inline.

Register renderers via uiComponents

Instead of mounting separate makeAssistantDataUI components, you can register renderers directly on the runtime hook via the uiComponents option:

@/components/MyAssistant.tsx
const runtime = useLangGraphRuntime({
  stream: async function* (messages, { initialize }) { /* ... */ },
  uiComponents: {
    renderers: {
      chart: ({ data }) => <Chart series={data.series} title={data.title} />,
      table: ({ data }) => <DataTable rows={data.rows} />,
    },
  },
});

Static renderers are matched by ui_message name. If no match is found, the part renders nothing unless a fallback is provided.

Dynamic loading with fallback

LangSmith's Generative UI supports colocating UI code with your graph and loading it at runtime via LoadExternalComponent. The fallback option handles any ui_message name that has no static renderer:

@/components/MyAssistant.tsx
import { LoadExternalComponent } from "@langchain/langgraph-sdk/react-ui";

const runtime = useLangGraphRuntime({
  stream: async function* (messages, { initialize }) { /* ... */ },
  uiComponents: {
    fallback: ({ name, data }) => (
      <LoadExternalComponent name={name} props={data} />
    ),
    renderers: {
      chart: ({ data }) => <Chart {...data} />,
    },
  },
});

With this setup:

  • A ui_message with name: "chart" renders the static Chart component
  • Any other name (e.g. "dashboard", "form") is handled by fallback, which fetches the component from LangSmith at runtime

The fallback component receives the same props as any data renderer: name, data, and part state metadata. This lets you pass the component name and props straight through to LoadExternalComponent.

Semantics

The adapter mirrors the reducer in @langchain/langgraph-sdk/react-ui exactly:

  • UI messages are keyed by their own id. Pushing the same id again replaces the existing entry
  • Passing metadata: { merge: true } shallow-merges props onto the previous entry
  • Emitting { type: "remove-ui", id } (via delete_ui_message / ui.delete(id)) removes the entry
  • UI messages without metadata.message_id are held in the runtime but not injected into any message; use useLangGraphUIMessages() to access the raw list if needed
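These semantics can be expressed as a pure reducer. The sketch below is illustrative — the types are simplified stand-ins, not the adapter's actual definitions:

```typescript
// Reducer sketch for the semantics above: keyed by id, replace on re-push,
// shallow-merge props when metadata.merge is set, remove on remove-ui.
type UIMessage = {
  type: "ui";
  id: string;
  name: string;
  props: Record<string, unknown>;
  metadata?: { merge?: boolean; message_id?: string };
};
type RemoveUIMessage = { type: "remove-ui"; id: string };

export const applyUIEvent = (
  list: UIMessage[],
  event: UIMessage | RemoveUIMessage,
): UIMessage[] => {
  if (event.type === "remove-ui") return list.filter((m) => m.id !== event.id);
  const i = list.findIndex((m) => m.id === event.id);
  if (i === -1) return [...list, event]; // new id: append
  const next = [...list];
  next[i] = event.metadata?.merge
    ? { ...event, props: { ...list[i].props, ...event.props } } // shallow-merge
    : event; // same id: replace
  return next;
};
```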

Restore persisted UI messages on thread switch

If your graph persists UI messages in state via typedUi, return them from the load callback so they're restored when the user switches threads or refreshes the page:

const runtime = useLangGraphRuntime({
  stream: async function* (messages, { initialize }) { /* ... */ },
  load: async (externalId) => {
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      uiMessages: state.values.ui,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
});

Without this, each reload starts with an empty UI list even though the messages themselves are loaded.

Escape hatch: useLangGraphUIMessages

import { useLangGraphUIMessages } from "@assistant-ui/react-langgraph";

function Sidebar() {
  const uiMessages = useLangGraphUIMessages();
  // Filter, group, or render UI messages outside the thread
  return <>{uiMessages.map(/* ... */)}</>;
}