LangGraph Cloud

Getting Started

Requirements

You need a LangGraph Cloud API server. You can start a server locally via LangGraph Studio or use LangSmith for a hosted version.
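
For example, with the LangGraph CLI installed, you can run a local dev server from a directory containing a langgraph.json (a sketch; this assumes your graph is already configured per the LangGraph docs):

pip install -U "langgraph-cli[inmem]"
langgraph dev   # starts a local API server, by default at http://localhost:2024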

The state of the graph you are using must have a messages key containing a list of LangChain-compatible messages.
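
For instance, a minimal compatible graph built with the JS SDK could look like the following (a sketch, assuming @langchain/langgraph and @langchain/openai are installed; MessagesAnnotation provides the required messages key):

import { StateGraph, MessagesAnnotation, START } from "@langchain/langgraph";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// MessagesAnnotation defines a state with a messages key that appends
// incoming messages: the shape the assistant-ui runtime expects.
export const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", async (state) => {
    const response = await model.invoke(state.messages);
    return { messages: [response] };
  })
  .addEdge(START, "agent")
  .compile();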

New project from template

Create a new project based on the LangGraph assistant-ui template

npx create-assistant-ui@latest -t langgraph my-app

Set environment variables

Create a .env.local file in your project with the following variables:

# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id

Installation in existing React project

Install dependencies

npm install @assistant-ui/react @assistant-ui/react-ui @assistant-ui/react-langgraph @langchain/langgraph-sdk

Set up a proxy backend endpoint (optional, for production)

This example forwards every request from the browser to the LangGraph server and injects the API key on the server side. For production use cases, you should limit the API calls to the subset of endpoints you need and perform authorization checks; a sketch of such an allowlist check follows the route handler below.

@/app/api/[..._path]/route.ts
import { NextRequest, NextResponse } from "next/server";

function getCorsHeaders() {
  return {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET, POST, PUT, PATCH, DELETE, OPTIONS",
    "Access-Control-Allow-Headers": "*",
  };
}

async function handleRequest(req: NextRequest, method: string) {
  try {
    const path = req.nextUrl.pathname.replace(/^\/?api\//, "");
    const url = new URL(req.url);
    const searchParams = new URLSearchParams(url.search);
    searchParams.delete("_path");
    searchParams.delete("nxtP_path");
    const queryString = searchParams.toString()
      ? `?${searchParams.toString()}`
      : "";

    const options: RequestInit = {
      method,
      headers: {
        "x-api-key": process.env["LANGCHAIN_API_KEY"] || "",
      },
    };

    if (["POST", "PUT", "PATCH"].includes(method)) {
      options.body = await req.text();
    }

    const res = await fetch(
      `${process.env["LANGGRAPH_API_URL"]}/${path}${queryString}`,
      options,
    );

    return new NextResponse(res.body, {
      status: res.status,
      statusText: res.statusText,
      headers: {
        // Headers is not spreadable as a plain object; copy its entries instead.
        ...Object.fromEntries(res.headers),
        ...getCorsHeaders(),
      },
    });
  } catch (e: any) {
    return NextResponse.json({ error: e.message }, { status: e.status ?? 500 });
  }
}

export const GET = (req: NextRequest) => handleRequest(req, "GET");
export const POST = (req: NextRequest) => handleRequest(req, "POST");
export const PUT = (req: NextRequest) => handleRequest(req, "PUT");
export const PATCH = (req: NextRequest) => handleRequest(req, "PATCH");
export const DELETE = (req: NextRequest) => handleRequest(req, "DELETE");

// Add a new OPTIONS handler
export const OPTIONS = () => {
  return new NextResponse(null, {
    status: 204,
    headers: {
      ...getCorsHeaders(),
    },
  });
};
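
As a starting point for locking the proxy down, a hypothetical allowlist check could run at the top of handleRequest before forwarding (the patterns below are illustrative; adjust them to the endpoints your app actually uses, such as the three called by chatApi.ts below):

// Hypothetical allowlist covering thread creation, state reads, and streaming runs.
const ALLOWED = [
  { method: "POST", pattern: /^threads$/ },
  { method: "GET", pattern: /^threads\/[^/]+\/state$/ },
  { method: "POST", pattern: /^threads\/[^/]+\/runs\/stream$/ },
];

function isAllowed(path: string, method: string): boolean {
  return ALLOWED.some((r) => r.method === method && r.pattern.test(path));
}

// In handleRequest, before the fetch:
// if (!isAllowed(path, method)) {
//   return NextResponse.json({ error: "Forbidden" }, { status: 403 });
// }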

Set up helper functions

@/lib/chatApi.ts
import {  } from "@langchain/langgraph-sdk";
import {  } from "@assistant-ui/react-langgraph";

const  = () => {
  const  = .["NEXT_PUBLIC_LANGGRAPH_API_URL"] || "/api";
  return new ({
    ,
  });
};

export const  = async () => {
  const  = ();
  return ..();
};

export const  = async (
  : string,
): <<{ : [] }>> => {
  const  = ();
  return ..();
};

export const  = async (: {
  : string;
  : ;
}) => {
  const  = ();
  return ..(
    .,
    .["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!,
    {
      : {
        : .,
      },
      : "messages",
    },
  );
};
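
For orientation, the helpers can also be exercised outside of React (a sketch, assuming a running server and a module context that allows top-level await):

import { createThread, getThreadState } from "@/lib/chatApi";

// Create a thread, then read back its (initially empty) state.
const { thread_id } = await createThread();
const state = await getThreadState(thread_id);
console.log(state.values);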

Define a MyAssistant component

@/components/MyAssistant.tsx
"use client";

import {  } from "@/components/assistant-ui";
import {  } from "@assistant-ui/react";
import {  } from "@assistant-ui/react-langgraph";

import { , ,  } from "@/lib/chatApi";

export function () {
  const  = ({
    : async (, {  }) => {
      const {  } = await ();
      if (!) throw new ("Thread not found");
      return ({
        : ,
        ,
      });
    },
    : async () => {
      const {  } = await ();
      return { :  };
    },
    : async () => {
      const  = await ();
      return {
        : .values.messages,
        : .tasks[0]?.interrupts,
      };
    },
  });

  return (
    < ={}>
      < />
    </AssistantRuntimeProvider>
  );
}

Use the MyAssistant component

@/app/page.tsx
import {  } from "@/components/MyAssistant";

export default function () {
  return (
    < ="h-dvh">
      < />
    </>
  );
}

Set up environment variables

Create a .env.local file in your project with the following variables:

# LANGCHAIN_API_KEY=your_api_key # for production
# LANGGRAPH_API_URL=your_api_url # for production
NEXT_PUBLIC_LANGGRAPH_API_URL=your_api_url # for development (no api key required)
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id

Set up UI components

Follow the UI Components guide to set up the UI components.

Advanced APIs

Message Accumulator

The LangGraphMessageAccumulator lets you append messages arriving from the server in order to replicate the graph's messages state on the client.

import {
  LangGraphMessageAccumulator,
  appendLangChainChunk,
} from "@assistant-ui/react-langgraph";

const accumulator = new LangGraphMessageAccumulator({
  appendMessage: appendLangChainChunk,
});

// Add new chunks from the server
if (event.event === "messages/partial") accumulator.addMessages(event.data);
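
Putting it together, a run stream, such as one returned by the sendMessage helper above, can be replayed through the accumulator (a sketch; threadId and messages are assumed to be in scope):

const stream = await sendMessage({ threadId, messages });
for await (const event of stream) {
  // Merge partial chunks into the replicated client-side message state.
  if (event.event === "messages/partial") accumulator.addMessages(event.data);
}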

Message Conversion

Use convertLangChainMessages to transform LangChain messages to assistant-ui format:

import { convertLangChainMessages } from "@assistant-ui/react-langgraph";

const threadMessage = convertLangChainMessages(langChainMessage);

Thread Management

Basic Thread Support

The useLangGraphRuntime hook now includes built-in thread management capabilities:

const runtime = useLangGraphRuntime({
  stream: async (messages, { initialize }) => {
    // initialize() creates or loads a thread and returns its IDs
    const { remoteId, externalId } = await initialize();
    // Use externalId (your backend's thread ID) for API calls
    return sendMessage({ threadId: externalId, messages });
  },
  create: async () => {
    // Called when creating a new thread
    const { thread_id } = await createThread();
    return { externalId: thread_id };
  },
  load: async (externalId) => {
    // Called when loading an existing thread
    const state = await getThreadState(externalId);
    return {
      messages: state.values.messages,
      interrupts: state.tasks[0]?.interrupts,
    };
  },
});

Cloud Persistence

For persistent thread history across sessions, integrate with assistant-cloud:

import { AssistantCloud } from "@assistant-ui/react";

const runtime = useLangGraphRuntime({
  cloud: new AssistantCloud({
    baseUrl: process.env["NEXT_PUBLIC_ASSISTANT_BASE_URL"]!,
  }),
  // ... stream, create, load functions
});

See the Cloud Persistence guide for detailed setup instructions.

Interrupt Persistence

LangGraph supports interrupting the execution flow to request user input or handle specific interactions. These interrupts can be persisted and restored when switching between threads:

  1. Make sure your thread state type includes the interrupts field
  2. Return the interrupts from the load function along with the messages
  3. The runtime will automatically restore the interrupt state when switching threads

This feature is particularly useful for applications that require user approval flows, multi-step forms, or any other interactive elements that might span multiple thread switches.