# Getting Started
URL: /docs/runtimes/langgraph
Connect to the LangGraph Cloud API for agent workflows with streaming.
Requirements \[#requirements]
You need a LangGraph Cloud API server. You can start a server locally via [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio) or use [LangSmith](https://www.langchain.com/langsmith) for a hosted version.
The state of the graph you are using must have a `messages` key containing a list of LangChain-like messages.
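For reference, a compatible graph state on a JS server might look like the sketch below. This assumes `@langchain/langgraph` on the server side; a Python graph with a `messages` key works equally well, and the `"chat"` node name and model call are placeholders:

```typescript
import { StateGraph, MessagesAnnotation, START } from "@langchain/langgraph";

// MessagesAnnotation defines the required `messages` key with
// append/merge reducer semantics for LangChain messages.
const graph = new StateGraph(MessagesAnnotation)
  .addNode("chat", async (state) => {
    // Call your model here and return new messages to append,
    // e.g. { messages: [await model.invoke(state.messages)] }.
    return { messages: [] };
  })
  .addEdge(START, "chat")
  .compile();
```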
New project from template \[#new-project-from-template]
Create a new project based on the LangGraph assistant-ui template \[#create-a-new-project-based-on-the-langgraph-assistant-ui-template]
```sh
npx create-assistant-ui@latest -t langgraph my-app
```
Set environment variables \[#set-environment-variables]
Create a `.env.local` file in your project with the following variables:
```sh
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
```
Installation in existing React project \[#installation-in-existing-react-project]
Install dependencies \[#install-dependencies]
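Install the packages imported in the snippets below (npm shown; use your package manager of choice):

```sh
npm install @assistant-ui/react @assistant-ui/react-langgraph @langchain/langgraph-sdk
```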
Setup a proxy backend endpoint (optional, for production) \[#setup-a-proxy-backend-endpoint-optional-for-production]
This example forwards every request from the browser to the LangGraph server. For production use cases, you should limit the API calls to the subset of endpoints you need and perform authorization checks.
```tsx twoslash title="@/app/api/[..._path]/route.ts"
import { NextRequest, NextResponse } from "next/server";

function getCorsHeaders() {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "*",
  };
}

async function handleRequest(req: NextRequest, method: string) {
  try {
    const path = req.nextUrl.pathname.replace(/^\/?api\//, "");
    const url = new URL(req.url);
    const searchParams = new URLSearchParams(url.search);
    searchParams.delete("_path");
    searchParams.delete("nxtP_path");
    const queryString = searchParams.toString()
      ? `?${searchParams.toString()}`
      : "";

    const options: RequestInit = {
      method,
      headers: {
        "x-api-key": process.env["LANGCHAIN_API_KEY"] || "",
      },
    };

    if (["POST", "PUT", "PATCH"].includes(method)) {
      options.body = await req.text();
    }

    const res = await fetch(
      `${process.env["LANGGRAPH_API_URL"]}/${path}${queryString}`,
      options,
    );

    return new NextResponse(res.body, {
      status: res.status,
      statusText: res.statusText,
      headers: {
        // Headers is not a plain object; convert it before spreading.
        ...Object.fromEntries(res.headers.entries()),
        ...getCorsHeaders(),
      },
    });
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

export const GET = (req: NextRequest) => handleRequest(req, "GET");
export const POST = (req: NextRequest) => handleRequest(req, "POST");
export const PUT = (req: NextRequest) => handleRequest(req, "PUT");
export const PATCH = (req: NextRequest) => handleRequest(req, "PATCH");
export const DELETE = (req: NextRequest) => handleRequest(req, "DELETE");

// Handle CORS preflight requests.
export const OPTIONS = () => {
  return new NextResponse(null, {
    status: 204,
    headers: {
      ...getCorsHeaders(),
    },
  });
};
```
Setup helper functions \[#setup-helper-functions]
```tsx twoslash include chatApi title="@/lib/chatApi.ts"
// @filename: /lib/chatApi.ts
// ---cut---
import { Client, ThreadState } from "@langchain/langgraph-sdk";
import {
  LangChainMessage,
  LangGraphSendMessageConfig,
} from "@assistant-ui/react-langgraph";

const createClient = () => {
  const apiUrl = process.env["NEXT_PUBLIC_LANGGRAPH_API_URL"] || "/api";
  return new Client({
    apiUrl,
  });
};

export const createThread = async () => {
  const client = createClient();
  return client.threads.create();
};

export const getThreadState = async (
  threadId: string,
): Promise<ThreadState<{ messages: LangChainMessage[] }>> => {
  const client = createClient();
  return client.threads.getState(threadId);
};

export const sendMessage = async (params: {
  threadId: string;
  messages: LangChainMessage[];
  config?: LangGraphSendMessageConfig;
}) => {
  const client = createClient();
  return client.runs.stream(
    params.threadId,
    process.env["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!,
    {
      input: {
        messages: params.messages,
      },
      streamMode: "messages",
      ...params.config,
    },
  );
};
```
Define a MyAssistant component \[#define-a-myassistant-component]
```tsx twoslash include MyAssistant title="@/components/MyAssistant.tsx"
// @filename: /components/MyAssistant.tsx
// @include: chatApi
// ---cut---
"use client";

import { Thread } from "@/components/assistant-ui/thread";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useLangGraphRuntime } from "@assistant-ui/react-langgraph";
import { createThread, getThreadState, sendMessage } from "@/lib/chatApi";

export function MyAssistant() {
  const runtime = useLangGraphRuntime({
    stream: async (messages, { initialize, config }) => {
      const { externalId } = await initialize();
      if (!externalId) throw new Error("Thread not found");

      return sendMessage({
        threadId: externalId,
        messages,
        config,
      });
    },
    create: async () => {
      const { thread_id } = await createThread();
      return { externalId: thread_id };
    },
    load: async (externalId) => {
      const state = await getThreadState(externalId);
      return {
        messages: state.values.messages,
        interrupts: state.tasks[0]?.interrupts,
      };
    },
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
Use the MyAssistant component \[#use-the-myassistant-component]
```tsx twoslash title="@/app/page.tsx" {1,6}
// @include: MyAssistant
// @filename: /app/page.tsx
// ---cut---
import { MyAssistant } from "@/components/MyAssistant";

export default function Home() {
  return (
    <main className="h-dvh">
      <MyAssistant />
    </main>
  );
}
```
Setup environment variables \[#setup-environment-variables]
Create a `.env.local` file in your project with the following variables:
```sh
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
```
Setup UI components \[#setup-ui-components]
Follow the [UI Components](/docs/ui/thread) guide to setup the UI components.
Advanced APIs \[#advanced-apis]
Message Accumulator \[#message-accumulator]
The `LangGraphMessageAccumulator` accumulates message chunks streamed from the server, letting you replicate the graph's message state on the client side.
```typescript
import {
  LangGraphMessageAccumulator,
  appendLangChainChunk,
} from "@assistant-ui/react-langgraph";

const accumulator = new LangGraphMessageAccumulator({
  appendMessage: appendLangChainChunk,
});

// Add new chunks from the server
if (event.event === "messages/partial") accumulator.addMessages(event.data);
```
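In context, a minimal consumption loop might look like this sketch, which reuses the `sendMessage` helper from above and assumes `threadId` and `userMessage` are in scope:

```typescript
// Drain the run stream, feeding partial message chunks into the accumulator.
const stream = await sendMessage({ threadId, messages: [userMessage] });
for await (const event of stream) {
  if (event.event === "messages/partial") {
    accumulator.addMessages(event.data);
  }
}
```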
Message Conversion \[#message-conversion]
Use `convertLangChainMessages` to transform LangChain messages to assistant-ui format:
```typescript
import { convertLangChainMessages } from "@assistant-ui/react-langgraph";
const threadMessage = convertLangChainMessages(langChainMessage);
```
Thread Management \[#thread-management]
Basic Thread Support \[#basic-thread-support]
The `useLangGraphRuntime` hook includes built-in thread management capabilities:
```typescript
const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize, config }) => {
    // initialize() creates or loads a thread and returns its IDs
    const { remoteId, externalId } = await initialize();

    // Use externalId (your backend's thread ID) for API calls
    return sendMessage({ threadId: externalId, messages, config });
  },
  create: async () => {
    // Called when creating a new thread
    const { thread_id } = await createThread();
    return { externalId: thread_id };
  },
  load: async (externalId) => {
    // Called when loading an existing thread
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
});
```
Cloud Persistence \[#cloud-persistence]
For persistent thread history across sessions, integrate with assistant-cloud:
```typescript
import { AssistantCloud } from "@assistant-ui/react";

const runtime = useLangGraphRuntime({
  cloud: new AssistantCloud({
    baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
  }),
  // ... stream, create, load functions
});
```
See the [Cloud Persistence guide](/docs/cloud/persistence/langgraph) for detailed setup instructions.
Interrupt Persistence \[#interrupt-persistence]
LangGraph supports interrupting the execution flow to request user input or handle specific interactions. These interrupts can be persisted and restored when switching between threads:
1. Make sure your thread state type includes the `interrupts` field
2. Return the interrupts from the `load` function along with the messages (see the sketch below)
3. The runtime will automatically restore the interrupt state when switching threads
This feature is particularly useful for applications that require user approval flows, multi-step forms, or any other interactive elements that might span multiple thread switches.
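As a minimal illustration of steps 1 and 2, using the `getThreadState` helper defined earlier (the `LoadedThread` type name is hypothetical, for illustration only):

```typescript
import { getThreadState } from "@/lib/chatApi";

// Step 1: a thread-state shape that includes the interrupts field.
type LoadedThread = {
  messages: Awaited<ReturnType<typeof getThreadState>>["values"]["messages"];
  interrupts?: unknown[];
};

// Step 2: return interrupts from `load` along with the messages;
// step 3 (restoring them on thread switch) is handled by the runtime.
const load = async (externalId: string): Promise<LoadedThread> => {
  const state = await getThreadState(externalId);
  return {
    messages: state.values.messages,
    interrupts: state.tasks[0]?.interrupts,
  };
};
```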