# Getting Started
URL: /docs/runtimes/langchain
Adapter for LangChain's `useStream` hook, exposed as an assistant-ui runtime.
`@assistant-ui/react-langchain` wraps [`useStream`](https://docs.langchain.com/oss/javascript/langgraph-sdk/react-stream) from `@langchain/react` and exposes it as an assistant-ui runtime. Use this package if you are already integrating your app with `@langchain/react` and want assistant-ui on top of the upstream hook.
assistant-ui ships two adapters for LangGraph backends:
* **[`@assistant-ui/react-langgraph`](/docs/runtimes/langgraph)** integrates with `@langchain/langgraph-sdk` directly and exposes features like subgraph events, UI messages, message metadata, and end-to-end cancellation.
* **`@assistant-ui/react-langchain`** (this page) wraps `@langchain/react`'s `useStream`. It is lighter-weight and stays aligned with upstream, but currently does not expose every `react-langgraph` feature.
See [the comparison doc](/docs/runtimes/langchain/comparison) for a feature gap table.
## Requirements \[#requirements]
You need a LangGraph Cloud API server. You can start a server locally via [LangGraph Studio](https://github.com/langchain-ai/langgraph-studio) or use [LangSmith](https://www.langchain.com/langsmith) for a hosted version.
The state of the graph you are using must have a `messages` key containing a list of LangChain-compatible messages (or pass a custom `messagesKey`).
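For reference, a minimal graph that satisfies this requirement might look as follows. This is a sketch using `MessagesAnnotation` from `@langchain/langgraph`, which provides the expected `messages` key; the node logic is a placeholder for your own agent:

```typescript
import { MessagesAnnotation, StateGraph } from "@langchain/langgraph";

// MessagesAnnotation defines a state with a `messages` key that
// appends incoming messages — the shape the runtime expects.
export const graph = new StateGraph(MessagesAnnotation)
  .addNode("agent", async (state) => {
    // Placeholder: call your model here and return { messages: [...] }.
    return state;
  })
  .addEdge("__start__", "agent")
  .compile();
```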
## Installation \[#installation]
### Install dependencies \[#install-dependencies]
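The exact package list depends on your setup; assuming npm, the imports used on this page suggest:

```shell
npm install @assistant-ui/react @assistant-ui/react-langchain @langchain/langgraph-sdk
```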
### Define a `MyAssistant` component \[#define-a-myassistant-component]
```tsx title="@/components/MyAssistant.tsx"
"use client";

import { Thread } from "@/components/assistant-ui/thread";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useStreamRuntime } from "@assistant-ui/react-langchain";

export function MyAssistant() {
  const runtime = useStreamRuntime({
    assistantId: process.env["NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID"]!,
    apiUrl: process.env["NEXT_PUBLIC_LANGGRAPH_API_URL"],
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
### Use the `MyAssistant` component \[#use-the-myassistant-component]
```tsx title="@/app/page.tsx"
import { MyAssistant } from "@/components/MyAssistant";

export default function Home() {
  return (
    <main className="h-dvh">
      <MyAssistant />
    </main>
  );
}
```
### Set environment variables \[#set-environment-variables]
Create a `.env.local` file in your project with the following variables:
```sh
NEXT_PUBLIC_LANGGRAPH_API_URL=http://localhost:2024
NEXT_PUBLIC_LANGGRAPH_ASSISTANT_ID=your_graph_id
```
### Set up UI components \[#set-up-ui-components]
Follow the [UI Components](/docs/ui/thread) guide to set up the UI components.
## `useStreamRuntime` options \[#usestreamruntime-options]
`useStreamRuntime` accepts every option that [`useStream`](https://docs.langchain.com/oss/javascript/langgraph-sdk/react-stream) from `@langchain/react` accepts, plus three assistant-ui-specific fields:
| Option | Type | Description |
| ------------- | -------------------------------------- | ------------------------------------------------------------ |
| `cloud` | `AssistantCloud` | Optional — persists threads via assistant-cloud. |
| `adapters` | `{ attachments?, speech?, feedback? }` | Optional — attachment, speech, and feedback adapters. |
| `messagesKey` | `string` | The state key that holds messages. Defaults to `"messages"`. |
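Putting the table together, a sketch combining the assistant-ui-specific fields with the upstream options (the adapter instances here are hypothetical placeholders for your own implementations):

```typescript
import { useStreamRuntime } from "@assistant-ui/react-langchain";

const runtime = useStreamRuntime({
  // upstream useStream options:
  assistantId: "agent",
  apiUrl: "http://localhost:2024",
  // assistant-ui-specific options:
  messagesKey: "messages", // default; set this if your graph uses another key
  adapters: {
    attachments: myAttachmentAdapter, // hypothetical adapter instances
    feedback: myFeedbackAdapter,
  },
});
```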
## Reading custom state keys \[#reading-custom-state-keys]
LangGraph agents often expose structured state beyond messages (plans, todos, scratch files, generative-UI artifacts). Read them directly with `useLangChainState`. It mirrors `useStream().values[key]` upstream and updates when the stream emits new state.
```tsx
import { useLangChainState } from "@assistant-ui/react-langchain";

type Todo = { id: string; title: string; done: boolean };

function TodoList() {
  const todos = useLangChainState<Todo[]>("todos", []);

  return (
    <ul>
      {todos.map((t) => (
        <li key={t.id}>
          {t.done ? "✓" : "○"} {t.title}
        </li>
      ))}
    </ul>
  );
}
```
Signatures:
```ts
useLangChainState<T>(key: string): T | undefined;
useLangChainState<T>(key: string, defaultValue: T): T;
```
This hook is especially useful with the [`deepagents`](https://docs.langchain.com/oss/python/deepagents) middleware, whose `write_todos` step updates `state.todos` alongside the tool-call stream. Reading the state key directly avoids reconstructing the list from partial tool-call args.
Added in v0.0.2 — see issue [#3862](https://github.com/assistant-ui/assistant-ui/issues/3862) for motivation.
## Interrupts \[#interrupts]
LangGraph interrupts pause the graph and wait for client input. `useLangChainInterruptState` exposes the current interrupt; `useLangChainSubmit` resumes the graph with a raw state update.
```tsx
import {
useLangChainInterruptState,
useLangChainSubmit,
} from "@assistant-ui/react-langchain";
import { Command } from "@langchain/langgraph-sdk";
function InterruptPrompt() {
  const interrupt = useLangChainInterruptState();
  const submit = useLangChainSubmit();

  if (!interrupt) return null;

  return (
    <div>
      <pre>{JSON.stringify(interrupt.value, null, 2)}</pre>
      {/* The resume payload shape depends on your graph's interrupt handler. */}
      <button
        onClick={() => submit(undefined, { command: new Command({ resume: true }) })}
      >
        Resume
      </button>
    </div>
  );
}
```
## Message conversion \[#message-conversion]
`convertLangChainBaseMessage` transforms a LangChain `BaseMessage` into an assistant-ui message. Use it when building a custom `ExternalStoreAdapter` that needs to consume LangChain messages outside of `useStreamRuntime`.
```ts
import { convertLangChainBaseMessage } from "@assistant-ui/react-langchain";
```
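For instance, a sketch that converts a list of LangChain messages before handing them to an external store (assuming, as the name suggests, that `convertLangChainBaseMessage` takes a single `BaseMessage`):

```typescript
import { convertLangChainBaseMessage } from "@assistant-ui/react-langchain";
import type { BaseMessage } from "@langchain/core/messages";

// Convert each LangChain message into the assistant-ui message shape
// so an ExternalStoreAdapter can consume the array.
function toAssistantUiMessages(messages: BaseMessage[]) {
  return messages.map((m) => convertLangChainBaseMessage(m));
}
```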
## Cloud persistence \[#cloud-persistence]
Pass an `AssistantCloud` instance to persist threads across sessions. The runtime automatically wires thread list management and resumes state from the cloud.
```tsx
import { AssistantCloud } from "assistant-cloud";
import { useStreamRuntime } from "@assistant-ui/react-langchain";
const cloud = new AssistantCloud({ baseUrl: "/api/cloud" });
const runtime = useStreamRuntime({
cloud,
assistantId: "agent",
apiUrl: "http://localhost:2024",
});
```
## Custom `messagesKey` \[#custom-messageskey]
If your graph stores messages under a non-default key, pass `messagesKey` so the runtime submits tool results and human turns to the correct state slot:
```tsx
const runtime = useStreamRuntime({
assistantId: "agent",
apiUrl: "http://localhost:2024",
messagesKey: "chat_messages",
});
```