# Getting Started
URL: /docs/runtimes/langgraph
Connect to LangGraph Cloud API for agent workflows with streaming.
## Requirements \[#requirements]
You need a LangGraph Cloud API server. You can start a server locally via [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio) or use [LangSmith](https://www.langchain.com/langsmith) for a hosted version.
The state of the graph you are using must have a `messages` key containing a list of LangChain-compatible messages.
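As a simplified sketch (real LangChain messages carry additional fields; the type names here are illustrative, not the SDK's), a compatible state shape looks like this:

```typescript
// Illustrative sketch of the expected graph state shape.
// Only the essentials of a LangChain-style message are shown.
type LangChainMessageLike = {
  id?: string;
  type: "human" | "ai" | "system" | "tool";
  content: string;
};

type GraphState = {
  // The runtime reads and writes this key.
  messages: LangChainMessageLike[];
};

const state: GraphState = {
  messages: [{ type: "human", content: "Hello!" }],
};
```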
## New project from template \[#new-project-from-template]

### Create a new project based on the LangGraph assistant-ui template \[#create-a-new-project-based-on-the-langgraph-assistant-ui-template]
```sh
npx create-assistant-ui@latest -t langgraph my-app
```
### Set environment variables \[#set-environment-variables]
Create a `.env.local` file in your project with the following variables:
```sh
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
```
## Installation in existing React project \[#installation-in-existing-react-project]
### Install dependencies \[#install-dependencies]
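Assuming an npm-based setup, install the packages this guide imports (package names taken from the code examples below):

```sh
npm install @assistant-ui/react @assistant-ui/react-langgraph @langchain/langgraph-sdk
```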
### Setup a proxy backend endpoint (optional, for production) \[#setup-a-proxy-backend-endpoint-optional-for-production]
This example forwards every request to the LangGraph server directly from the
browser. For production use cases, you should limit the API calls to the
subset of endpoints you need and perform authorization checks.
```tsx twoslash title="@/app/api/[...path]/route.ts"
import { NextRequest, NextResponse } from "next/server";

function getCorsHeaders() {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "*",
  };
}

async function handleRequest(req: NextRequest, method: string) {
  try {
    const path = req.nextUrl.pathname.replace(/^\/?api\//, "");
    const url = new URL(req.url);
    const searchParams = new URLSearchParams(url.search);
    searchParams.delete("_path");
    searchParams.delete("nxtP_path");
    const queryString = searchParams.toString()
      ? `?${searchParams.toString()}`
      : "";

    const options: RequestInit = {
      method,
      headers: {
        "x-api-key": process.env["LANGCHAIN_API_KEY"] || "",
      },
    };

    if (["POST", "PUT", "PATCH"].includes(method)) {
      options.body = await req.text();
    }

    const res = await fetch(
      `${process.env["LANGGRAPH_API_URL"]}/${path}${queryString}`,
      options,
    );

    return new NextResponse(res.body, {
      status: res.status,
      statusText: res.statusText,
      headers: {
        ...res.headers,
        ...getCorsHeaders(),
      },
    });
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

export const GET = (req: NextRequest) => handleRequest(req, "GET");
export const POST = (req: NextRequest) => handleRequest(req, "POST");
export const PUT = (req: NextRequest) => handleRequest(req, "PUT");
export const PATCH = (req: NextRequest) => handleRequest(req, "PATCH");
export const DELETE = (req: NextRequest) => handleRequest(req, "DELETE");

// OPTIONS handler for CORS preflight requests
export const OPTIONS = () => {
  return new NextResponse(null, {
    status: 204,
    headers: {
      ...getCorsHeaders(),
    },
  });
};
```
### Setup helper functions \[#setup-helper-functions]
```tsx twoslash include chatApi title="@/lib/chatApi.ts"
// @filename: /lib/chatApi.ts
// ---cut---
import { Client, ThreadState } from "@langchain/langgraph-sdk";
import {
  LangChainMessage,
  LangGraphSendMessageConfig,
} from "@assistant-ui/react-langgraph";

const createClient = () => {
  const apiUrl = process.env["NEXT_PUBLIC_LANGGRAPH_API_URL"] || "/api";
  return new Client({
    apiUrl,
  });
};

export const createThread = async () => {
  const client = createClient();
  return client.threads.create();
};

export const getThreadState = async (
  threadId: string,
): Promise<ThreadState<{ messages: LangChainMessage[] }>> => {
  const client = createClient();
  return client.threads.getState(threadId);
};

export const sendMessage = async (params: {
  threadId: string;
  messages: LangChainMessage[];
  config?: LangGraphSendMessageConfig;
}) => {
  const client = createClient();
  const { checkpointId, ...restConfig } = params.config ?? {};
  return client.runs.stream(
    params.threadId,
    process.env["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!,
    {
      input:
        params.messages.length > 0 ? { messages: params.messages } : null,
      streamMode: "messages-tuple",
      ...(checkpointId && { checkpoint_id: checkpointId }),
      ...restConfig,
    },
  );
};
```
### Define a MyAssistant component \[#define-a-myassistant-component]
```tsx twoslash include MyAssistant title="@/components/MyAssistant.tsx"
// @filename: /components/MyAssistant.tsx
// @include: chatApi
// ---cut---
"use client";

import { Thread } from "@/components/assistant-ui/thread";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useLangGraphRuntime } from "@assistant-ui/react-langgraph";
import { createThread, getThreadState, sendMessage } from "@/lib/chatApi";

export function MyAssistant() {
  const runtime = useLangGraphRuntime({
    stream: async (messages, { initialize, ...config }) => {
      const { externalId } = await initialize();
      if (!externalId) throw new Error("Thread not found");

      return sendMessage({
        threadId: externalId,
        messages,
        config,
      });
    },
    create: async () => {
      const { thread_id } = await createThread();
      return { externalId: thread_id };
    },
    load: async (externalId) => {
      const state = await getThreadState(externalId);
      return {
        messages: state.values.messages,
        interrupts: state.tasks[0]?.interrupts,
      };
    },
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
### Use the MyAssistant component \[#use-the-myassistant-component]
```tsx twoslash title="@/app/page.tsx" {1,6}
// @include: MyAssistant
// @filename: /app/page.tsx
// ---cut---
import { MyAssistant } from "@/components/MyAssistant";

export default function Home() {
  return (
    <main className="h-dvh">
      <MyAssistant />
    </main>
  );
}
```
### Setup environment variables \[#setup-environment-variables]
Create a `.env.local` file in your project with the following variables:
```sh
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
```
### Setup UI components \[#setup-ui-components]
Follow the [UI Components](/docs/ui/thread) guide to setup the UI components.
## Advanced APIs \[#advanced-apis]

### Message Accumulator \[#message-accumulator]
The `LangGraphMessageAccumulator` lets you append messages arriving from the server to replicate the message state on the client.
```typescript
import {
  LangGraphMessageAccumulator,
  appendLangChainChunk,
} from "@assistant-ui/react-langgraph";

const accumulator = new LangGraphMessageAccumulator({
  appendMessage: appendLangChainChunk,
});
// Add new chunks from the server
if (event.event === "messages/partial") accumulator.addMessages(event.data);
```
### Message Conversion \[#message-conversion]
Use `convertLangChainMessages` to transform LangChain messages to assistant-ui format:
```typescript
import { convertLangChainMessages } from "@assistant-ui/react-langgraph";
const threadMessage = convertLangChainMessages(langChainMessage);
```
### Event Handlers \[#event-handlers]
You can listen to streaming events by passing `eventHandlers` to `useLangGraphRuntime`:
```typescript
const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, ...config }) => {
    /* ... */
  },
  eventHandlers: {
    onMessageChunk: (chunk, metadata) => {
      // Fired for each chunk in messages-tuple mode.
      // metadata contains langgraph_step, langgraph_node, ls_model_name, etc.
    },
    onValues: (values) => {
      // Fired when a "values" event is received
    },
    onUpdates: (updates) => {
      // Fired when an "updates" event is received
    },
    onMetadata: (metadata) => {
      /* thread metadata */
    },
    onError: (error) => {
      /* stream errors */
    },
    onCustomEvent: (type, data) => {
      /* custom events */
    },
  },
});
```
### Message Metadata \[#message-metadata]
When using `streamMode: "messages-tuple"`, each chunk includes metadata from the LangGraph server. Access accumulated metadata per message with the `useLangGraphMessageMetadata` hook:
```typescript
import { useLangGraphMessageMetadata } from "@assistant-ui/react-langgraph";
function MyComponent() {
  const metadata = useLangGraphMessageMetadata();
  // Map keyed by message ID
}
```
## Thread Management \[#thread-management]

### Basic Thread Support \[#basic-thread-support]
The `useLangGraphRuntime` hook includes built-in thread management capabilities:
```typescript
const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, ...config }) => {
    // initialize() creates or loads a thread and returns its IDs
    const { remoteId, externalId } = await initialize();

    // Use externalId (your backend's thread ID) for API calls
    return sendMessage({ threadId: externalId, messages, config });
  },
  create: async () => {
    // Called when creating a new thread
    const { thread_id } = await createThread();
    return { externalId: thread_id };
  },
  load: async (externalId) => {
    // Called when loading an existing thread
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
});
```
### Cloud Persistence \[#cloud-persistence]
For persistent thread history across sessions, integrate with assistant-cloud:
```typescript
const runtime = useLangGraphRuntime({
  cloud: new AssistantCloud({
    baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL,
  }),
  // ... stream, create, load functions
});
```
See the [Cloud Persistence guide](/docs/cloud/langgraph) for detailed setup instructions.
### Message Editing & Regeneration \[#message-editing--regeneration]
LangGraph uses server-side checkpoints for state management. To support message editing (branching) and regeneration, you need to provide a `getCheckpointId` callback that resolves the appropriate checkpoint for server-side forking.
```typescript
const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, ...config }) => {
    const { externalId } = await initialize();
    if (!externalId) throw new Error("Thread not found");
    return sendMessage({ threadId: externalId, messages, config });
  },
  create: async () => {
    const { thread_id } = await createThread();
    return { externalId: thread_id };
  },
  load: async (externalId) => {
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
  getCheckpointId: async (threadId, parentMessages) => {
    const client = createClient();
    // Get the thread state history and find the checkpoint
    // that matches the parent messages by exact message ID sequence.
    // If IDs are missing, return null and skip edit/reload for safety.
    const history = await client.threads.getHistory(threadId);
    for (const state of history) {
      const stateMessages = state.values.messages;
      if (!stateMessages || stateMessages.length !== parentMessages.length) {
        continue;
      }
      const hasStableIds =
        parentMessages.every((message) => typeof message.id === "string") &&
        stateMessages.every((message) => typeof message.id === "string");
      if (!hasStableIds) {
        continue;
      }
      const isMatch = parentMessages.every(
        (message, index) => message.id === stateMessages[index]?.id,
      );
      if (isMatch) {
        return state.checkpoint.checkpoint_id ?? null;
      }
    }
    return null;
  },
});
```
When `getCheckpointId` is provided:
* **Edit buttons** appear on user messages, allowing users to edit and resend from that point
* **Regenerate buttons** appear on assistant messages, allowing users to regenerate the response
The resolved `checkpointId` is passed to your `stream` callback via `config.checkpointId`. Your `sendMessage` helper should map it to the LangGraph SDK's `checkpoint_id` parameter (see the helper function in the setup section above).
Without `getCheckpointId`, the edit and regenerate buttons will not appear. This is intentional — simply truncating client-side messages without forking from the correct server-side checkpoint would produce incorrect state.
### Interrupt Persistence \[#interrupt-persistence]
LangGraph supports interrupting the execution flow to request user input or handle specific interactions. These interrupts can be persisted and restored when switching between threads:
1. Make sure your thread state type includes the `interrupts` field
2. Return the interrupts from the `load` function along with the messages
3. The runtime will automatically restore the interrupt state when switching threads
This feature is particularly useful for applications that require user approval flows, multi-step forms, or any other interactive elements that might span multiple thread switches.
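As a rough sketch of step 2 (the type names here are illustrative, not the SDK's actual types), the object returned from `load` carries the interrupts alongside the messages:

```typescript
// Illustrative types only; the real SDK types differ.
type Interrupt = { value: unknown };

type LoadResult = {
  messages: unknown[];
  // Restored by the runtime when the thread is re-opened.
  interrupts?: Interrupt[];
};

// What a load() implementation might return for a thread
// paused on a pending approval.
const loadResult: LoadResult = {
  messages: [],
  interrupts: [{ value: { reason: "awaiting user approval" } }],
};
```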