Getting Started
Requirements
You need a LangGraph Cloud API server. You can start a server locally via LangGraph Studio or use LangSmith for a hosted version.
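If you have the LangGraph CLI installed, you can also start a local in-memory development server from your graph project (a sketch; adjust to your setup):
langgraph dev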
The state of the graph you are using must have a messages key containing a list of LangChain-compatible messages.
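If you are building the graph yourself, here is a minimal sketch of a compatible state (assuming the JS @langchain/langgraph package; the "chat" node is a placeholder):
import { MessagesAnnotation, StateGraph, START } from "@langchain/langgraph";

// MessagesAnnotation defines a `messages` key whose reducer appends
// LangChain messages, matching the state shape assistant-ui expects.
const graph = new StateGraph(MessagesAnnotation)
  .addNode("chat", async (state) => {
    // Call your model with state.messages here and return the reply.
    return { messages: [] };
  })
  .addEdge(START, "chat")
  .compile();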
New project from template
Create a new project based on the LangGraph assistant-ui template
npx create-assistant-ui@latest -t langgraph my-app
Set environment variables
Create a .env.local file in your project with the following variables:
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
Installation in existing React project
Install dependencies
npm install @assistant-ui/react @assistant-ui/react-ui @assistant-ui/react-langgraph @langchain/langgraph-sdk
Set up a proxy backend endpoint (optional, for production)
The proxy below forwards every request from the browser to the LangGraph server and attaches your API key on the server. For production use cases, you should limit the forwarded calls to the subset of endpoints you need and perform authorization checks.
// app/api/[..._path]/route.ts (the catch-all segment name matches the
// "_path" query params deleted below)
import { NextRequest, NextResponse } from "next/server";

export const runtime = "edge";

function getCorsHeaders() {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "*",
  };
}

async function handleRequest(req: NextRequest, method: string) {
  try {
    const path = req.nextUrl.pathname.replace(/^\/?api\//, "");
    const url = new URL(req.url);
    const searchParams = new URLSearchParams(url.search);
    searchParams.delete("_path");
    searchParams.delete("nxtP_path");
    const queryString = searchParams.toString()
      ? `?${searchParams.toString()}`
      : "";

    const options: RequestInit = {
      method,
      headers: {
        "x-api-key": process.env["LANGCHAIN_API_KEY"] || "",
      },
    };

    if (["POST", "PUT", "PATCH"].includes(method)) {
      options.body = await req.text();
    }

    const res = await fetch(
      `${process.env["LANGGRAPH_API_URL"]}/${path}${queryString}`,
      options,
    );

    return new NextResponse(res.body, {
      status: res.status,
      statusText: res.statusText,
      headers: {
        // Headers is not a plain object, so copy its entries before spreading
        ...Object.fromEntries(res.headers.entries()),
        ...getCorsHeaders(),
      },
    });
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

export const GET = (req: NextRequest) => handleRequest(req, "GET");
export const POST = (req: NextRequest) => handleRequest(req, "POST");
export const PUT = (req: NextRequest) => handleRequest(req, "PUT");
export const PATCH = (req: NextRequest) => handleRequest(req, "PATCH");
export const DELETE = (req: NextRequest) => handleRequest(req, "DELETE");

// Respond to CORS preflight requests
export const OPTIONS = () => {
  return new NextResponse(null, {
    status: 204,
    headers: {
      ...getCorsHeaders(),
    },
  });
};
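With this proxy in place, the browser talks to /api on your own domain and LANGCHAIN_API_KEY never leaves the server; the createClient helper below falls back to /api when NEXT_PUBLIC_LANGGRAPH_API_URL is not set.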
Set up helper functions
// lib/chatApi.ts
import { Client, ThreadState } from "@langchain/langgraph-sdk";
import { LangChainMessage } from "@assistant-ui/react-langgraph";

const createClient = () => {
  const apiUrl = process.env["NEXT_PUBLIC_LANGGRAPH_API_URL"] || "/api";
  return new Client({
    apiUrl,
  });
};

export const createThread = async () => {
  const client = createClient();
  return client.threads.create();
};

export const getThreadState = async (
  threadId: string,
): Promise<ThreadState<{ messages: LangChainMessage[] }>> => {
  const client = createClient();
  return client.threads.getState(threadId);
};

export const sendMessage = async (params: {
  threadId: string;
  messages: LangChainMessage[];
}) => {
  const client = createClient();
  return client.runs.stream(
    params.threadId,
    process.env["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!,
    {
      input: {
        messages: params.messages,
      },
      streamMode: "messages",
    },
  );
};
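As a rough usage sketch, the stream returned by sendMessage is an async generator of events; the thread id and message below are placeholders:
// Hypothetical usage: the thread id and message content are placeholders.
const stream = await sendMessage({
  threadId: "your-thread-id",
  messages: [{ type: "human", content: "Hello!" }],
});

// With streamMode: "messages", events such as "messages/partial" carry
// message chunks in their data payload.
for await (const chunk of stream) {
  console.log(chunk.event, chunk.data);
}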
Define a MyAssistant component
"use client";
import { useRef } from "react";
import { Thread } from "@/components/assistant-ui";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useLangGraphRuntime } from "@assistant-ui/react-langgraph";
import { createThread, getThreadState, sendMessage } from "@/lib/chatApi";
export function MyAssistant() {
const threadIdRef = useRef<string | undefined>();
const runtime = useLangGraphRuntime({
threadId: threadIdRef.current,
stream: async (messages) => {
if (!threadIdRef.current) {
const { thread_id } = await createThread();
threadIdRef.current = thread_id;
}
const threadId = threadIdRef.current;
return sendMessage({
threadId,
messages,
});
},
onSwitchToNewThread: async () => {
const { thread_id } = await createThread();
threadIdRef.current = thread_id;
},
onSwitchToThread: async (threadId) => {
const state = await getThreadState(threadId);
threadIdRef.current = threadId;
return {
messages: state.values.messages,
interrupts: state.tasks[0]?.interrupts,
};
},
});
return (
<AssistantRuntimeProvider runtime={runtime}>
<Thread />
</AssistantRuntimeProvider>
);
}
Use the MyAssistant component
// app/page.tsx
import { MyAssistant } from "@/components/MyAssistant";

export default function Home() {
  return (
    <main className="h-dvh">
      <MyAssistant />
    </main>
  );
}
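Note that the h-dvh class assumes Tailwind CSS is configured in your project; substitute an equivalent full-height style if you are not using Tailwind.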
Set up environment variables
Create a .env.local file in your project with the following variables:
# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
Set up UI components
Follow the UI Components guide to set up the UI components.
Advanced APIs
Message Accumulator
The LangGraphMessageAccumulator lets you append messages streamed from the server, replicating the messages state on the client.
import {
  LangGraphMessageAccumulator,
  appendLangChainChunk,
} from "@assistant-ui/react-langgraph";

const accumulator = new LangGraphMessageAccumulator({
  appendMessage: appendLangChainChunk,
});

// Add new chunks from the server
if (event.event === "messages/partial") accumulator.addMessages(event.data);
Message Conversion
Use convertLangChainMessages to transform LangChain messages to the assistant-ui format:
import { convertLangChainMessages } from "@assistant-ui/react-langgraph";
const threadMessage = convertLangChainMessages(langChainMessage);
Interrupt Persistence
LangGraph supports interrupting the execution flow to request user input or handle specific interactions. These interrupts can be persisted and restored when switching between threads. This means that if a user switches away from a thread during an interaction (like waiting for user approval), the interaction state will be preserved when they return to that thread.
To handle interrupts in your application:
- Make sure your thread state type includes the interrupts field
- Return the interrupts from onSwitchToThread along with the messages
- The runtime will automatically restore the interrupt state when switching threads
This feature is particularly useful for applications that require user approval flows, multi-step forms, or any other interactive elements that might span multiple thread switches.
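For illustration, a hypothetical getInterrupts helper (reusing ThreadState and LangChainMessage from the helpers above) reads the pending interrupts off a thread state the same way the MyAssistant example does:
import { ThreadState } from "@langchain/langgraph-sdk";
import { LangChainMessage } from "@assistant-ui/react-langgraph";

// Hypothetical helper: pull pending interrupts out of a thread state,
// mirroring what onSwitchToThread returns in the MyAssistant example above.
const getInterrupts = (state: ThreadState<{ messages: LangChainMessage[] }>) =>
  state.tasks[0]?.interrupts;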