# Architecture
URL: /docs/architecture
How components, runtimes, and cloud services fit together.
## Main Pillars

assistant-ui is built on three main pillars:

* **Frontend components**: Shadcn UI chat components with built-in state management
* **Runtime**: A state management layer connecting the UI to LLMs and backend services
* **Assistant Cloud**: A hosted service for thread persistence, history, and user management
### 1. Frontend components
Styled, functional chat components built on top of shadcn/ui, with context state management provided by the assistant-ui runtime provider. [View our components](/docs/ui/thread)
### 2. Runtime
A React state management context for assistant chat. The runtime handles data conversion between local state and calls to backends and LLMs. We offer runtimes for frameworks such as the Vercel AI SDK, LangGraph, and LangChain, integrations like Helicone, a local runtime, and an ExternalStore for when you need full control of the frontend message state. [See the runtimes we support](/docs/runtimes/pick-a-runtime)
### 3. Assistant Cloud
A hosted service providing thread management and message history for your assistant. Assistant Cloud stores complete message history, automatically persists threads, supports human-in-the-loop workflows, and integrates with common auth providers so users can resume conversations at any point. [Cloud Docs](/docs/cloud/overview)
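The pillars come together in a setup like the following sketch (names match the AI SDK example from the installation guide; the exact wiring depends on which runtime you pick):

```tsx
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { Thread } from "@/components/assistant-ui/thread";

const App = () => {
  // Runtime: connects the UI to your backend / LLM
  const runtime = useChatRuntime({ api: "/api/chat" });

  return (
    // Frontend components render inside the runtime provider;
    // Assistant Cloud is layered in via the runtime (see the Cloud docs)
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
};
```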
### Common Architectures

There are three common ways to architect your assistant-ui application:
#### **1. Direct Integration with External Providers**
```mermaid
graph TD
A[Frontend Components] --> B[Runtime]
B --> D[External Providers or LLM APIs]
classDef default color:#f8fafc,text-align:center
style A fill:#e879f9,stroke:#2e1065,stroke-width:2px,color:#2e1065,font-weight:bold
style B fill:#93c5fd,stroke:#1e3a8a,stroke-width:2px,color:#1e3a8a,font-weight:bold
style D fill:#fca5a5,stroke:#7f1d1d,stroke-width:2px,color:#7f1d1d,font-weight:bold
class A,B,D default
```
#### **2. Using your own API endpoint**
```mermaid
graph TD
A[Frontend Components] --> B[Runtime]
B --> E[Your API Backend]
E --> D[External Providers or LLM APIs]
classDef default color:#f8fafc,text-align:center
style A fill:#e879f9,stroke:#2e1065,stroke-width:2px,color:#2e1065,font-weight:bold
style B fill:#93c5fd,stroke:#1e3a8a,stroke-width:2px,color:#1e3a8a,font-weight:bold
style D fill:#fca5a5,stroke:#7f1d1d,stroke-width:2px,color:#7f1d1d,font-weight:bold
style E fill:#fca5a5,stroke:#7f1d1d,stroke-width:2px,color:#7f1d1d,font-weight:bold
class A,B,D,E default
```
#### **3. With Assistant Cloud**
```mermaid
graph TD
A[Frontend Components] --> B[Runtime]
B --> C[Cloud]
E --> C
C --> D[External Providers or LLM APIs]
B --> E[Your API Backend]
classDef default color:#f8fafc,text-align:center
style A fill:#e879f9,stroke:#2e1065,stroke-width:2px,color:#2e1065,font-weight:bold
style B fill:#93c5fd,stroke:#1e3a8a,stroke-width:2px,color:#1e3a8a,font-weight:bold
style C fill:#86efac,stroke:#064e3b,stroke-width:2px,color:#064e3b,font-weight:bold
style D fill:#fca5a5,stroke:#7f1d1d,stroke-width:2px,color:#7f1d1d,font-weight:bold
style E fill:#fca5a5,stroke:#7f1d1d,stroke-width:2px,color:#7f1d1d,font-weight:bold
class A,B,C,D,E default
```
# CLI
URL: /docs/cli
Scaffold projects, add components, and manage updates from the command line.
Use the `assistant-ui` CLI to quickly set up new projects and add components to existing ones.
## init
Use the `init` command to set up assistant-ui in a new or existing project: it installs dependencies, adds components, and configures your project.
```bash
npx assistant-ui@latest init
```
This will:
* Detect if you have an existing project with a `package.json`
* Use `shadcn add` to install the assistant-ui quick-start component
* Add the default assistant-ui components (thread, composer, etc.) to your project
* Configure TypeScript paths and imports
**When to use:**
* Adding assistant-ui to an **existing** Next.js project
* First-time setup in a project with `package.json`
**Options**
```bash
Usage: assistant-ui init [options]

initialize assistant-ui in a new or existing project

Options:
  -c, --cwd   the working directory. defaults to the current directory.
  -h, --help  display help for command
```
## create
Use the `create` command to scaffold a new Next.js project with assistant-ui pre-configured.
```bash
npx assistant-ui@latest create [project-directory]
```
This command uses `create-next-app` with assistant-ui starter templates.
**Available Templates**
| Template | Description | Command |
| ----------- | ------------------------------------ | -------------------------------------- |
| `default` | Basic setup with Vercel AI SDK | `npx assistant-ui create` |
| `cloud` | With Assistant Cloud for persistence | `npx assistant-ui create -t cloud` |
| `langgraph` | LangGraph integration | `npx assistant-ui create -t langgraph` |
| `mcp` | Model Context Protocol support | `npx assistant-ui create -t mcp` |
**Available Examples**
Use `--example` to create a project from one of the monorepo examples with full feature demonstrations:
| Example | Description | Command |
| -------------------------- | -------------------------------------- | ------------------------------------------------------------ |
| `with-ai-sdk-v6` | Vercel AI SDK v6 integration | `npx assistant-ui create my-app -e with-ai-sdk-v6` |
| `with-langgraph` | LangGraph agent with custom tools | `npx assistant-ui create my-app -e with-langgraph` |
| `with-cloud` | Assistant Cloud persistence | `npx assistant-ui create my-app -e with-cloud` |
| `with-ag-ui` | AG-UI protocol integration | `npx assistant-ui create my-app -e with-ag-ui` |
| `with-assistant-transport` | Custom backend via Assistant Transport | `npx assistant-ui create my-app -e with-assistant-transport` |
| `with-external-store` | External message store | `npx assistant-ui create my-app -e with-external-store` |
| `with-custom-thread-list` | Custom thread list UI | `npx assistant-ui create my-app -e with-custom-thread-list` |
| `with-react-hook-form` | React Hook Form integration | `npx assistant-ui create my-app -e with-react-hook-form` |
| `with-ffmpeg` | FFmpeg video processing tool | `npx assistant-ui create my-app -e with-ffmpeg` |
| `with-elevenlabs-scribe` | ElevenLabs voice transcription | `npx assistant-ui create my-app -e with-elevenlabs-scribe` |
| `with-parent-id-grouping` | Message part grouping | `npx assistant-ui create my-app -e with-parent-id-grouping` |
| `with-react-router` | React Router v7 integration | `npx assistant-ui create my-app -e with-react-router` |
| `with-tanstack` | TanStack Start integration | `npx assistant-ui create my-app -e with-tanstack` |
**Examples**
```bash
# Create with default template
npx assistant-ui@latest create my-app
# Create with cloud template
npx assistant-ui@latest create my-app -t cloud
# Create from an example
npx assistant-ui@latest create my-app --example with-langgraph
# Create with specific package manager
npx assistant-ui@latest create my-app --use-pnpm
# Skip package installation
npx assistant-ui@latest create my-app --skip-install
```
**Options**
```bash
Usage: assistant-ui create [project-directory] [options]

create a new project

Arguments:
  project-directory  name of the project directory

Options:
  -t, --template  template to use (default, cloud, langgraph, mcp)
  -e, --example   create from an example (e.g., with-langgraph)
  --use-npm       explicitly use npm
  --use-pnpm      explicitly use pnpm
  --use-yarn      explicitly use yarn
  --use-bun       explicitly use bun
  --skip-install  skip installing packages
  -h, --help      display help for command
```
## add
Use the `add` command to add individual components to your project.
```bash
npx assistant-ui@latest add [component]
```
The `add` command fetches components from the assistant-ui registry and adds them to your project. It automatically:
* Installs required dependencies
* Adds TypeScript types
* Configures imports
**Popular Components**
```bash
# Add the basic thread component
npx assistant-ui add thread
# Add thread list for multi-conversation support
npx assistant-ui add thread-list
# Add assistant modal
npx assistant-ui add assistant-modal
# Add multiple components at once
npx assistant-ui add thread thread-list assistant-sidebar
```
**Options**
```bash
Usage: assistant-ui add [options]

add a component to your project

Arguments:
  components       the components to add

Options:
  -y, --yes        skip confirmation prompt. (default: true)
  -o, --overwrite  overwrite existing files. (default: false)
  -c, --cwd        the working directory. defaults to the current directory.
  -p, --path       the path to add the component to.
  -h, --help       display help for command
```
## update
Use the `update` command to update all `@assistant-ui/*` packages to their latest versions.
```bash
npx assistant-ui@latest update
```
This command:
* Scans your `package.json` for assistant-ui packages
* Updates them to the latest versions using your package manager
* Preserves other dependencies
**Examples**
```bash
# Update all assistant-ui packages
npx assistant-ui update
# Dry run to see what would be updated
npx assistant-ui update --dry
```
**Options**
```bash
Usage: assistant-ui update [options]

update all '@assistant-ui/*' packages to latest versions

Options:
  --dry       print the command instead of running it
  -c, --cwd   the working directory. defaults to the current directory.
  -h, --help  display help for command
```
## upgrade
Use the `upgrade` command to automatically migrate your codebase when there are breaking changes.
```bash
npx assistant-ui@latest upgrade
```
This command:
* Runs codemods to transform your code
* Updates import paths and API usage
* Detects required dependency changes
* Prompts to install new packages
**What it does:**
* Applies all available codemods sequentially
* Shows progress bar with file count
* Reports any transformation errors
* Automatically detects and offers to install new dependencies
**Example output:**
```bash
Starting upgrade...
Found 24 files to process.
Progress |████████████████████| 100% | ETA: 0s || Running v0-11/content-part-to-message-part...
Checking for package dependencies...
✅ Upgrade complete!
```
**Options**
```bash
Usage: assistant-ui upgrade [options]

upgrade and apply codemods for breaking changes

Options:
  -d, --dry          dry run (no changes are made to files)
  -p, --print        print transformed files to stdout
  --verbose          show more information about the transform process
  -j, --jscodeshift  pass options directly to jscodeshift
  -h, --help         display help for command
```
## codemod
Use the `codemod` command to run a specific codemod transformation.
```bash
npx assistant-ui@latest codemod <codemod> <source>
```
This is useful when you want to run a specific migration rather than all available upgrades.
**Examples**
```bash
# Run specific codemod on a directory
npx assistant-ui codemod v0-11/content-part-to-message-part ./src
# Run with dry run to preview changes
npx assistant-ui codemod v0-11/content-part-to-message-part ./src --dry
# Print transformed output
npx assistant-ui codemod v0-11/content-part-to-message-part ./src --print
```
**Options**
```bash
Usage: assistant-ui codemod [options]

run a specific codemod transformation

Arguments:
  codemod            codemod to run
  source             path to source files or directory to transform

Options:
  -d, --dry          dry run (no changes are made to files)
  -p, --print        print transformed files to stdout
  --verbose          show more information about the transform process
  -j, --jscodeshift  pass options directly to jscodeshift
  -h, --help         display help for command
```
## mcp
Use the `mcp` command to install the assistant-ui MCP docs server for your IDE.
```bash
npx assistant-ui@latest mcp
```
This command configures the [Model Context Protocol](/docs/llm#mcp) server, giving your AI assistant direct access to assistant-ui documentation.
**Examples**
```bash
# Interactive - prompts to select IDE
npx assistant-ui mcp
# Install for specific IDE
npx assistant-ui mcp --cursor
npx assistant-ui mcp --windsurf
npx assistant-ui mcp --vscode
npx assistant-ui mcp --zed
npx assistant-ui mcp --claude-code
npx assistant-ui mcp --claude-desktop
```
**Options**
```bash
Usage: assistant-ui mcp [options]

install assistant-ui MCP docs server for your IDE

Options:
  --cursor          install for Cursor
  --windsurf        install for Windsurf
  --vscode          install for VSCode
  --zed             install for Zed
  --claude-code     install for Claude Code
  --claude-desktop  install for Claude Desktop
  -h, --help        display help for command
```
## Common Workflows
### Starting a new project
```bash
# Create a new project with the default template
npx assistant-ui@latest create my-chatbot
# Navigate into the directory
cd my-chatbot
# Start development
npm run dev
```
### Adding to existing project
```bash
# Initialize assistant-ui
npx assistant-ui@latest init
# Add additional components
npx assistant-ui@latest add thread-list assistant-modal
# Start development
npm run dev
```
### Keeping up to date
```bash
# Check for updates (dry run)
npx assistant-ui@latest update --dry
# Update all packages
npx assistant-ui@latest update
# Run upgrade codemods if needed
npx assistant-ui@latest upgrade
```
### Migrating versions
```bash
# Run automated migration
npx assistant-ui@latest upgrade
# Or run specific codemod
npx assistant-ui@latest codemod v0-11/content-part-to-message-part ./src
# Update packages after migration
npx assistant-ui@latest update
```
## Component Registry
The CLI pulls components from our public registry at [r.assistant-ui.com](https://r.assistant-ui.com).
Each component includes:
* Full TypeScript source code
* All required dependencies
* Tailwind CSS configuration
* Usage examples
Components are added directly to your project's source code, giving you full control to customize them.
## Troubleshooting
### Command not found
If you get a "command not found" error, make sure you're using `npx`:
```bash
npx assistant-ui@latest init
```
### Permission errors
On Linux/macOS, if you encounter permission errors:
```bash
sudo npx assistant-ui@latest init
```
Or fix npm permissions: [https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally](https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally)
### Conflicting dependencies
If you see dependency conflicts:
```bash
# Try with --force flag
npm install --force
# Or use legacy peer deps
npm install --legacy-peer-deps
```
### Component already exists
Use the `--overwrite` flag to replace existing components:
```bash
npx assistant-ui@latest add thread --overwrite
```
## Configuration
The CLI respects your project's configuration:
* **Package Manager**: Automatically detects npm, pnpm, yarn, or bun
* **TypeScript**: Works with your `tsconfig.json` paths
* **Tailwind**: Uses your `tailwind.config.js` settings
* **Import Aliases**: Respects `components.json` or `assistant-ui.json` configuration
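For reference, a shadcn-style `components.json` typically looks like the following sketch (the exact fields depend on your setup; this is an illustration, not a canonical schema):

```json
{
  "style": "default",
  "tailwind": {
    "css": "app/globals.css",
    "baseColor": "neutral",
    "cssVariables": true
  },
  "aliases": {
    "components": "@/components",
    "utils": "@/lib/utils"
  }
}
```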
# DevTools
URL: /docs/devtools
Inspect runtime state, context, and events in the browser.
The assistant-ui DevTools let you inspect assistant-ui state, context, and events without resorting to `console.log`. It's an easy way to see how data flows through assistant-ui's runtime layer.

## Setup
### Install the DevTools package
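With npm (the package name matches the import used below; swap in your package manager of choice):

```sh
npm install @assistant-ui/react-devtools
```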
### Mount the DevTools modal
```tsx
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { DevToolsModal } from "@assistant-ui/react-devtools";

export function AssistantApp() {
  return (
    // `runtime` comes from your runtime hook, e.g. useChatRuntime
    <AssistantRuntimeProvider runtime={runtime}>
      {/* ...your assistant-ui... */}
      <DevToolsModal />
    </AssistantRuntimeProvider>
  );
}
```
### Verify the DevTools overlay
That's it! In development builds you should now see the DevTools in the lower-right corner of your site.

# Introduction
URL: /docs
Beautiful, enterprise-grade AI chat interfaces for React applications.
assistant-ui helps you create beautiful, enterprise-grade AI chat interfaces in minutes. Whether you're building a ChatGPT clone, a customer support chatbot, an AI assistant, or a complex multi-agent application, assistant-ui provides the frontend primitives and state management layers so you can focus on what makes your application unique.
## Key Features
} title="Instant Chat UI">
Pre-built beautiful, customizable chat interfaces out of the box. Easy to quickly iterate on your idea.
} title="Chat State Management">
Powerful state management for chat interactions, optimized for streaming responses and efficient rendering.
} title="High Performance">
Optimized for speed and efficiency with minimal bundle size, ensuring your AI chat interfaces remain responsive.
} title="Framework Agnostic">
Easily integrate with any backend system, whether using Vercel AI SDK, direct LLM connections, or custom solutions. Works with any React-based framework.
## Quick Try
The fastest way to get started:
```sh
npx assistant-ui@latest create
```
This creates a new project with everything configured. Or choose a template:
```sh
# Assistant Cloud - with persistence and thread management
npx assistant-ui@latest create -t cloud
# LangGraph
npx assistant-ui@latest create -t langgraph
# MCP support
npx assistant-ui@latest create -t mcp
```
# Installation
URL: /docs/installation
Get assistant-ui running in 5 minutes with npm and your first chat component.
## Quick Start
The fastest way to get started with assistant-ui.

### Initialize assistant-ui
**Create a new project:**
```sh
npx assistant-ui@latest create
```
Or choose a template:
```sh
# Assistant Cloud - with persistence and thread management
npx assistant-ui@latest create -t cloud
# LangGraph
npx assistant-ui@latest create -t langgraph
# MCP support
npx assistant-ui@latest create -t mcp
```
**Add to an existing project:**
```sh
npx assistant-ui@latest init
```
### Add API key
Create a `.env` file with your API key:
```
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
### Start the app
```sh
npm run dev
```
## Manual Setup
If you prefer not to use the CLI, you can install components manually.
### Add assistant-ui
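The package names below come from the imports used later on this page (assuming npm):

```sh
npm install @assistant-ui/react @assistant-ui/react-ai-sdk
```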
### Set Up the Backend Endpoint
Install provider SDK:
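For example, for the OpenAI tab below (the other tabs use the corresponding provider package):

```sh
npm install ai @ai-sdk/openai
```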
Add an API endpoint:
```ts title="/app/api/chat/route.ts" tab="OpenAI"
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai("gpt-4o-mini"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Anthropic"
import { anthropic } from "@ai-sdk/anthropic";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: anthropic("claude-3-5-sonnet-20240620"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Azure"
import { azure } from "@ai-sdk/azure";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: azure("your-deployment-name"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="AWS"
import { bedrock } from "@ai-sdk/amazon-bedrock";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: bedrock("anthropic.claude-3-5-sonnet-20240620-v1:0"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Gemini"
import { google } from "@ai-sdk/google";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: google("gemini-2.0-flash"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="GCP"
import { vertex } from "@ai-sdk/google-vertex";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: vertex("gemini-1.5-pro"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Groq"
import { createOpenAI } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
const groq = createOpenAI({
apiKey: process.env.GROQ_API_KEY ?? "",
baseURL: "https://api.groq.com/openai/v1",
});
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: groq("llama3-70b-8192"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Fireworks"
import { createOpenAI } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
const fireworks = createOpenAI({
apiKey: process.env.FIREWORKS_API_KEY ?? "",
baseURL: "https://api.fireworks.ai/inference/v1",
});
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: fireworks("accounts/fireworks/models/firefunction-v2"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Cohere"
import { cohere } from "@ai-sdk/cohere";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: cohere("command-r-plus"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Ollama"
import { ollama } from "ollama-ai-provider-v2";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: ollama("llama3"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts" tab="Chrome AI"
import { chromeai } from "chrome-ai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: chromeai(),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
Define environment variables:
```sh title="/.env.local" tab="OpenAI"
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="Anthropic"
ANTHROPIC_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="Azure"
AZURE_RESOURCE_NAME="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AZURE_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="AWS"
AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AWS_REGION="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="Gemini"
GOOGLE_GENERATIVE_AI_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="GCP"
GOOGLE_VERTEX_PROJECT="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
GOOGLE_VERTEX_LOCATION="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
GOOGLE_APPLICATION_CREDENTIALS="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="Groq"
GROQ_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="Fireworks"
FIREWORKS_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local" tab="Cohere"
COHERE_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh tab="Ollama"
```
```sh tab="Chrome AI"
```
If you aren't using Next.js, you can deploy this endpoint to Cloudflare Workers or any other serverless platform.
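Because these handlers are plain `Request → Response` functions, they port with little change. For example, a sketch of the OpenAI variant as a Cloudflare Worker (module syntax, assuming the same `ai` and `@ai-sdk/openai` packages):

```ts
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";

export default {
  async fetch(req: Request): Promise<Response> {
    if (req.method !== "POST") {
      return new Response("Method not allowed", { status: 405 });
    }
    // Note: expose OPENAI_API_KEY to the Worker via secrets/env bindings
    const { messages } = await req.json();
    const result = streamText({
      model: openai("gpt-4o-mini"),
      messages: convertToModelMessages(messages),
    });
    return result.toUIMessageStreamResponse();
  },
};
```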
### Use it in your app
```tsx title="/app/page.tsx" tab="Thread"
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime, AssistantChatTransport } from "@assistant-ui/react-ai-sdk";
import { ThreadList } from "@/components/assistant-ui/thread-list";
import { Thread } from "@/components/assistant-ui/thread";
const MyApp = () => {
const runtime = useChatRuntime({
transport: new AssistantChatTransport({
api: "/api/chat",
}),
});
return (
);
};
```
```tsx title="/app/page.tsx" tab="AssistantModal"
// run `npx shadcn@latest add https://r.assistant-ui.com/assistant-modal.json`
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime, AssistantChatTransport } from "@assistant-ui/react-ai-sdk";
import { AssistantModal } from "@/components/assistant-ui/assistant-modal";
const MyApp = () => {
const runtime = useChatRuntime({
transport: new AssistantChatTransport({
api: "/api/chat",
}),
});
return (
);
};
```
# AI-Assisted Development
URL: /docs/llm
Use AI tools to build with assistant-ui faster. AI-accessible documentation, Claude Code skills, and MCP integration.
Build faster with AI assistants that understand assistant-ui. This page covers all the ways to give your AI tools access to assistant-ui documentation and context.
## AI Accessible Documentation
Our docs are designed to be easily accessible to AI assistants:
} title="/llms.txt" href="/llms.txt" external>
Structured index of all documentation pages. Point your AI here for a quick overview.
} title="/llms-full.txt" href="/llms-full.txt" external>
Complete documentation in a single file. Use this for full context.
} title=".mdx suffix">
Add `.mdx` to any page's URL to get raw markdown content (e.g., `/docs/installation.mdx`).
### Context Files
Add assistant-ui context to your project's `CLAUDE.md` or `.cursorrules`:
```md
## assistant-ui
This project uses assistant-ui for chat interfaces.
Documentation: https://www.assistant-ui.com/llms-full.txt
Key patterns:
- Use AssistantRuntimeProvider at the app root
- Thread component for full chat interface
- AssistantModal for floating chat widget
- useChatRuntime hook with AI SDK transport
```
## Skills
Install assistant-ui skills for your AI tools:
```sh
npx skills add assistant-ui/skills
```
| Skill | Purpose |
| --------------- | -------------------------------------------------------------------- |
| `/assistant-ui` | General architecture and overview guide |
| `/setup` | Project setup and configuration (AI SDK, LangGraph, custom backends) |
| `/primitives` | UI component primitives (Thread, Composer, Message, etc.) |
| `/runtime` | Runtime system and state management |
| `/tools` | Tool registration and tool UI |
| `/streaming` | Streaming protocol with assistant-stream |
| `/cloud` | Cloud persistence and authorization |
| `/thread-list` | Multi-thread management |
| `/update` | Update assistant-ui and AI SDK to latest versions |
Use a skill by typing its command in Claude Code, e.g. `/assistant-ui` for the main guide or `/setup` when setting up a project.
## MCP
`@assistant-ui/mcp-docs-server` provides direct access to assistant-ui documentation and examples in your IDE via the Model Context Protocol.
Once installed, your AI assistant has direct access to assistant-ui documentation; just ask naturally:
* "Add a chat interface with streaming support to my app"
* "How do I integrate assistant-ui with the Vercel AI SDK?"
* "My Thread component isn't updating, what could be wrong?"
### Quick Install (CLI)
```bash
npx assistant-ui mcp
```
Or specify your IDE directly:
```bash
npx assistant-ui mcp --cursor
npx assistant-ui mcp --windsurf
npx assistant-ui mcp --vscode
npx assistant-ui mcp --zed
npx assistant-ui mcp --claude-code
npx assistant-ui mcp --claude-desktop
```
### Manual Installation
For Cursor, add to `.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "assistant-ui": {
      "command": "npx",
      "args": ["-y", "@assistant-ui/mcp-docs-server"]
    }
  }
}
```
After adding, open Cursor Settings → MCP → find "assistant-ui" and click enable.
For Windsurf, add to `~/.codeium/windsurf/mcp_config.json`:
```json
{
  "mcpServers": {
    "assistant-ui": {
      "command": "npx",
      "args": ["-y", "@assistant-ui/mcp-docs-server"]
    }
  }
}
```
After adding, fully quit and re-open Windsurf.
For VS Code, add to `.vscode/mcp.json` in your project:
```json
{
  "servers": {
    "assistant-ui": {
      "command": "npx",
      "args": ["-y", "@assistant-ui/mcp-docs-server"],
      "type": "stdio"
    }
  }
}
```
Enable MCP in Settings → search "MCP" → enable "Chat > MCP". Use GitHub Copilot Chat in Agent mode.
For Zed, add to your settings file:
* macOS: `~/.zed/settings.json`
* Linux: `~/.config/zed/settings.json`
* Windows: `%APPDATA%\Zed\settings.json`
Or open via `Cmd/Ctrl + ,` → "Open JSON Settings"
```json
{
  "context_servers": {
    "assistant-ui": {
      "command": {
        "path": "npx",
        "args": ["-y", "@assistant-ui/mcp-docs-server"]
      }
    }
  }
}
```
The server starts automatically with the Assistant Panel.
For Claude Code, run:
```bash
claude mcp add assistant-ui -- npx -y @assistant-ui/mcp-docs-server
```
The server starts automatically once added.
For Claude Desktop, add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
  "mcpServers": {
    "assistant-ui": {
      "command": "npx",
      "args": ["-y", "@assistant-ui/mcp-docs-server"]
    }
  }
}
```
Restart Claude Desktop after updating the configuration.
### Available Tools
| Tool | Description |
| --------------------- | --------------------------------------------------------------------------------------- |
| `assistantUIDocs` | Access documentation: getting started, component APIs, runtime docs, integration guides |
| `assistantUIExamples` | Browse code examples: AI SDK, LangGraph, OpenAI Assistants, tool UI patterns |
### Troubleshooting
* **Server not starting**: Ensure `npx` is installed and working. Check configuration file syntax.
* **Tool calls failing**: Restart the MCP server and/or your IDE. Update to latest IDE version.
# Using old React versions
URL: /docs/react-compatibility
Compatibility notes for React 18, 17, and 16.
Older React versions are not continuously tested. If you encounter any issues
with integration, please contact us for help by joining our
[Discord](https://discord.gg/S9dwgCNEFs).
This guide provides instructions for configuring assistant-ui to work with React 18 or older versions.
## React 18
If you're using React 18, you need to update the shadcn/ui components to work with `forwardRef`. Specifically, you need to modify the Button component.
### Updating the Button Component
Navigate to your button component file (typically `/components/ui/button.tsx`) and wrap the Button component with `forwardRef`:
```tsx
// Before
function Button({
  className,
  variant,
  size,
  asChild = false,
  ...props
}: React.ComponentProps<"button"> &
  VariantProps<typeof buttonVariants> & {
    asChild?: boolean;
  }) {
  const Comp = asChild ? Slot : "button";
  return (
    <Comp
      className={cn(buttonVariants({ variant, size, className }))}
      {...props}
    />
  );
}

// After
const Button = React.forwardRef<
  HTMLButtonElement,
  React.ComponentProps<"button"> &
    VariantProps<typeof buttonVariants> & {
      asChild?: boolean;
    }
>(({ className, variant, size, asChild = false, ...props }, ref) => {
  const Comp = asChild ? Slot : "button";
  return (
    <Comp
      ref={ref}
      className={cn(buttonVariants({ variant, size, className }))}
      {...props}
    />
  );
});
Button.displayName = "Button";
```
**Note:** If you're using a lower React version (17 or 16), you'll also need to follow the instructions for that version.
## React 17
For React 17 compatibility, in addition to the modifications outlined for React 18, you must also include a polyfill for the `useSyncExternalStore` hook (utilized by zustand).
### Patching Zustand with patch-package
Since assistant-ui uses zustand internally, which depends on `useSyncExternalStore`, you'll need to patch the zustand package directly:
1. Install the required packages:
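The packages needed here are `patch-package` and the `use-sync-external-store` shim (assuming npm):

```bash
npm install --save-dev patch-package
npm install use-sync-external-store
```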
2. Add a postinstall script to your package.json:
```json
{
  "scripts": {
    "postinstall": "patch-package"
  }
}
```
3. Follow the instructions in [patch-package](https://github.com/ds300/patch-package): first make changes to the files of a package inside your `node_modules` folder, then run `yarn patch-package package-name` or `npx patch-package package-name`. You'll need a patch for zustand: open `node_modules/zustand/react.js` and make the following code changes:
```diff
diff --git a/node_modules/zustand/react.js b/node_modules/zustand/react.js
index 7599cfb..64530a8 100644
--- a/node_modules/zustand/react.js
+++ b/node_modules/zustand/react.js
@@ -1,6 +1,6 @@
'use strict';
-var React = require('react');
+var React = require('use-sync-external-store/shim');
var vanilla = require('zustand/vanilla');
const identity = (arg) => arg;
@@ -10,7 +10,7 @@ function useStore(api, selector = identity) {
() => selector(api.getState()),
() => selector(api.getInitialState())
);
- React.useDebugValue(slice);
+ //React.useDebugValue(slice);
return slice;
}
const createImpl = (createState) => {
```
This patch replaces the React import in zustand with the polyfill from `use-sync-external-store/shim` and comments out the `useDebugValue` call, which isn't needed.
Then run `yarn patch-package zustand` or `npx patch-package zustand`, which should create a `patches` folder with a zustand patch file similar to this:
```diff
diff --git a/node_modules/zustand/react.js b/node_modules/zustand/react.js
index 7599cfb..64530a8 100644
--- a/node_modules/zustand/react.js
+++ b/node_modules/zustand/react.js
@@ -1,6 +1,6 @@
'use strict';
-var React = require('react');
+var React = require('use-sync-external-store/shim');
var vanilla = require('zustand/vanilla');
const identity = (arg) => arg;
@@ -10,7 +10,7 @@ function useStore(api, selector = identity) {
() => selector(api.getState()),
() => selector(api.getInitialState())
);
- React.useDebugValue(slice);
+ //React.useDebugValue(slice);
return slice;
}
const createImpl = (createState) => {
```
4. You may also need to apply the same patch within `node_modules/@assistant-ui/react/`, and possibly a nested dependency patch for `node_modules/@assistant-ui/react/node_modules/zustand`. Look for instances of `React.useSyncExternalStore`, replace them with `useSyncExternalStore` imported from `use-sync-external-store/shim`, and comment out any `useDebugValue` calls. Finally, you may need to patch `useId` from React: within `node_modules/@assistant-ui/react/dist/runtimes/remote-thread-list/RemoteThreadListThreadListRuntimeCore.js`, change the following:
```diff
-import { Fragment, useEffect, useId } from "react";
+import { Fragment, useEffect, useRef } from "react";
import { create } from "zustand";
import { AssistantMessageStream } from "assistant-stream";
import { RuntimeAdapterProvider } from "../adapters/RuntimeAdapterProvider.js";
import { jsx } from "react/jsx-runtime";
+
+// PATCH-PACKAGE: Polyfill for useId if not available in React 16
+let useId;
+try {
+ // Try to use React's useId if available
+ useId = require("react").useId;
+} catch (e) {}
+if (!useId) {
+ // Fallback polyfill
+ let globalId = 0;
+ useId = function() {
+ const idRef = useRef();
+ if (!idRef.current) {
+ globalId++;
+ idRef.current = `uid-${globalId}`;
+ }
+ return idRef.current;
+ };
+}
```
5. Run the postinstall script to apply the patches:
```bash
npm run postinstall
# or
yarn postinstall
```
**Note:** If you're using React 16, you'll also need to follow the instructions for that version.
## React 16
This section is incomplete and contributions are welcome. If you're using
React 16 and have successfully integrated assistant-ui, please consider
contributing to this documentation.
For React 16 compatibility, you need to apply all the steps for **React 18** and **React 17** above.
## Additional Resources
If you encounter any issues with React compatibility, please:
1. Check that all required dependencies are installed
2. Ensure your component modifications are correctly implemented
3. Join our [Discord](https://discord.gg/S9dwgCNEFs) community for direct support
# User Authorization
URL: /docs/cloud/authorization
Configure workspace auth tokens and integrate with auth providers.
The assistant-ui API can be accessed directly by your frontend. This eliminates the need for a backend server on your side, except for authorizing your users.
This document explains how to set up your server to authorize users to access the assistant-ui API.
## Workspaces
Authorization is granted per workspace. Depending on the structure of your app, you might use user IDs as workspace IDs, or you might use a more complex structure.
For example, if your app supports multiple "projects", you might use project ID + user ID as the workspace ID (thread history scoped to user+project pairs).
## Workspace Auth Tokens
assistant-ui issues workspace auth tokens. These tokens give access to the assistant-ui API for a specific workspace.
Tokens are short-lived (5 minutes), so the client needs to periodically request a new token (handled automatically by assistant-ui).
There are two supported approaches to obtain a workspace auth token:
* Direct integration with your auth provider
* From a backend server / serverless function
### Choosing the right approach
Direct integration with your auth provider:

* simpler to set up and maintain
* assigns a workspace_id to every user (by using the user_id as the workspace_id)
* requires a supported auth provider (Clerk, Auth0, Supabase, Firebase, Stytch, Kinde, ...)

Backend server:

* more complex to set up
* more flexible workspace structure (multi-user workspaces, workspaces per project, etc.)
* supports self-hosted auth solutions, e.g. Auth.js
* requires a backend server / serverless function
You can always switch between the two approaches without any downtime or necessary database migrations.
Choose direct integration with your auth provider if you can. Otherwise, use a backend server.
### Auth Provider Integration
In the Assistant Cloud dashboard, go to the "Auth Integrations" tab and add a new integration.
Follow the steps to add your auth provider. (See the auth providers we have guides for at the bottom of this page.)
Then, pass in a function to `authToken` that returns an ID token from your auth provider.
```ts
import { AssistantCloud } from "@assistant-ui/react";

const assistantCloud = new AssistantCloud({
  authToken: () => JWT_TOKEN,
});
```
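For example, with Clerk, a sketch might look like this (assuming the "assistant-ui" JWT template described at the bottom of this page; `getToken` comes from Clerk's `useAuth()` hook):

```tsx
import { useAuth } from "@clerk/nextjs";
import { AssistantCloud } from "@assistant-ui/react";

const MyRuntimeProvider = () => {
  const { getToken } = useAuth();

  const cloud = new AssistantCloud({
    // Returns the ID token minted from the "assistant-ui" JWT template
    authToken: () => getToken({ template: "assistant-ui" }),
  });

  // ...pass `cloud` to your runtime
};
```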
### Backend Server Integration
#### Backend API Endpoint
The following is an API route example that creates an auth token based on an authenticated user's orgId and userId.
In the Assistant Cloud dashboard, go to the "API Keys" tab and create a new API key, then add it to your environment as `ASSISTANT_API_KEY=[KEY]`.
```ts title="/app/api/assistant-ui-token/route.ts"
import { AssistantCloud } from "@assistant-ui/react";
import { auth } from "@clerk/nextjs/server";

export const POST = async (req: Request) => {
  const { userId, orgId } = await auth();
  if (!userId) throw new Error("User not authenticated");

  const workspaceId = orgId ? `${orgId}_${userId}` : userId;
  const assistantCloud = new AssistantCloud({
    apiKey: process.env["ASSISTANT_API_KEY"]!,
    userId,
    workspaceId,
  });

  const { token } = await assistantCloud.auth.tokens.create();
  return new Response(token);
};
```
#### Frontend Implementation
The following client-side code fetches an auth token from the API route above:
```ts title="client.ts"
const cloud = new AssistantCloud({
  baseUrl: process.env["NEXT_PUBLIC_ASSISTANT_BASE_URL"]!,
  authToken: () =>
    fetch("/api/assistant-ui-token", { method: "POST" }).then((r) => r.text()),
});

const runtime = useChatRuntime({
  api: "/api/chat",
  cloud,
});
```
### Anonymous Frontend Implementation (without an auth provider)

The following example configures Assistant Cloud in anonymous mode, which requires no auth provider:
```tsx title="client.tsx"
import { AssistantCloud } from "@assistant-ui/react";

const cloud = new AssistantCloud({
  baseUrl: process.env["NEXT_PUBLIC_ASSISTANT_BASE_URL"]!,
  anonymous: true,
});

const runtime = useChatRuntime({
  api: "/api/chat",
  cloud,
});
```
### Setting up the Clerk Auth Provider
First, go to the Clerk dashboard and under "Configure" tab, "JWT Templates" section, create a new template. Choose a blank template and name it "assistant-ui".
As the "Claims" field, enter the following:
```json
{
  "aud": "assistant-ui"
}
```
Note: The `aud` claim ensures that the JWT is only valid for the assistant-ui API.
You can leave everything else as default. Take note of the "Issuer" and "JWKS Endpoint" fields.
Then, in the Assistant Cloud dashboard, navigate to the "Auth Rules" tab and create a new rule. Choose "Clerk" and enter the Issuer and JWKS Endpoint from the previous step. In the "Audience" field, enter "assistant-ui".
# Overview
URL: /docs/cloud/overview
Hosted service for thread management, chat history, and user authentication.
Assistant Cloud is a hosted service built for assistant-ui frontends that offers comprehensive thread management and message history. It automatically persists threads, supports human-in-the-loop workflows, and integrates with common auth providers to seamlessly allow users to resume conversations at any point.
## Features
### Thread management
Using our `ThreadList` component, show users a list of conversations. Let users seamlessly switch between conversations and even let long-running tasks continue in the background.
Assistant Cloud automatically persists a list of threads and corresponding metadata. It also automatically generates a title for conversations based on the initial messages.
Supported backends:
* AI SDK
* LangGraph
* Custom
### Chat history
For every conversation, Assistant Cloud can store the history of messages, allowing the user to resume the conversation at any point in time.
This supports human-in-the-loop workflows, where the execution of an agent is interrupted until user feedback is collected.
Supported backends:
* AI SDK
* LangGraph
* Custom (currently only Local Runtime)
### Authorization
Assistant Cloud integrates with your auth provider (Clerk, Auth0, Supabase, Firebase, ...) to identify your users and authorize them to access just the conversations they are allowed to see.
Supported auth providers:
* Clerk
* Auth0
* Supabase
* Firebase
* Your own
## Getting Started
To get started, create an account at [Assistant Cloud Dashboard](https://cloud.assistant-ui.com/) and follow one of the walkthroughs for your preferred backend:
* [AI SDK](/docs/cloud/persistence/ai-sdk)
* [LangGraph](/docs/cloud/persistence/langgraph)
You can also check out our example repositories to see how to integrate Assistant Cloud with your frontend:
* [With AI SDK](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud)
* [With LangGraph](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph)
# Assistant Transport
URL: /docs/runtimes/assistant-transport
Stream agent state to the frontend and handle user commands for custom agents.
If you've built an agent as a Python or TypeScript script and want to add a UI to it, you need to solve two problems: streaming updates to the frontend and integrating with the UI framework. Assistant Transport handles both.
Assistant Transport streams your agent's complete state to the frontend in real time. Unlike traditional approaches that only stream predefined message types (like text or tool calls), it streams your entire agent state, whatever structure your agent uses internally.
It consists of:
* **State streaming**: Efficiently streams updates to your agent state (supports any JSON object)
* **UI integration**: Converts your agent's state into assistant-ui components that render in the browser
* **Command handling**: Sends user actions (messages, tool executions, custom commands) back to your agent
## When to Use Assistant Transport
Use Assistant Transport when:
* You don't have a streaming protocol yet and need one
* You want your agent's native state to be directly accessible in the frontend
* You're building a custom agent framework or one without a streaming protocol (e.g. OSS LangGraph)
## Mental Model
```mermaid
graph LR
Frontend -->|Commands| Agent[Agent Server]
Agent -->|State Snapshots| Frontend
```
The frontend receives state snapshots and converts them to React components. The goal is to have the UI be a stateless view on top of the agent framework state.
The agent server receives commands from the frontend. When a user interacts with the UI (sends a message, clicks a button, etc.), the frontend queues a command and sends it to the backend. Assistant Transport defines standard commands like `add-message` and `add-tool-result`, and you can define custom commands.
### Command Lifecycle
Commands go through the following lifecycle:
```mermaid
graph LR
queued -->|sent to backend| in_transit
in_transit -->|backend processes| applied
```
The runtime alternates between **idle** (no active backend request) and **sending** (request in flight). When a new command is created while idle, it's immediately sent. Otherwise, it's queued until the current request completes.
```mermaid
graph LR
idle -->|new command| sending
sending -->|request completes| check{check queue}
check -->|queue has commands| sending
check -->|queue empty| idle
```
To implement this architecture, you need to build two pieces:
1. **Backend endpoint** on the agent server that accepts commands and returns a stream of state snapshots
2. **Frontend-side [state converter](#state-converter)** that converts state snapshots to assistant-ui's data format so that the UI primitives work
## Building a Backend Endpoint
Let's build the backend endpoint step by step. You'll need to handle incoming commands, update your agent state, and stream the updates back to the frontend.
The backend endpoint receives POST requests with the following payload:
```typescript
{
  state: T, // The previous state that the frontend has access to
  commands: AssistantTransportCommand[],
  system?: string,
  tools?: ToolDefinition[],
  threadId: string // The current thread/conversation identifier
}
```
The backend endpoint returns a stream of state snapshots using the `assistant-stream` library ([npm](https://www.npmjs.com/package/assistant-stream) / [PyPI](https://pypi.org/project/assistant-stream/)).
### Handling Commands
The backend endpoint processes commands from the `commands` array:
```python
for command in request.commands:
    if command.type == "add-message":
        ...  # Handle adding a user message
    elif command.type == "add-tool-result":
        ...  # Handle tool execution result
    elif command.type == "my-custom-command":
        ...  # Handle your custom command
```
### Streaming Updates
To stream state updates, modify `controller.state` within your run callback:
```python
from assistant_stream import RunController, create_run
from assistant_stream.serialization import DataStreamResponse

@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
    async def run_callback(controller: RunController):
        # Emits "set" at path ["message"] with value "Hello"
        controller.state["message"] = "Hello"
        # Emits "append-text" at path ["message"] with value " World"
        controller.state["message"] += " World"

    # Create and return the stream
    stream = create_run(run_callback, state=request.state)
    return DataStreamResponse(stream)
```
The state snapshots are automatically streamed to the frontend using the operations described in [Streaming Protocol](#streaming-protocol).
### Backend Reference Implementation
```python
from assistant_stream import RunController, create_run
from assistant_stream.serialization import DataStreamResponse

async def run_callback(controller: RunController):
    # Initialize state
    if controller.state is None:
        controller.state = {}

    # Process commands
    for command in request.commands:
        ...  # Handle commands...

    # Run your agent and stream updates
    async for event in agent.stream():
        # update controller.state
        pass

# Create and return the stream
stream = create_run(run_callback, state=request.state)
return DataStreamResponse(stream)
```
```python
from assistant_stream.serialization import DataStreamResponse
from assistant_stream import RunController, create_run

@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
    """Chat endpoint with custom agent streaming."""

    async def run_callback(controller: RunController):
        # Initialize controller state
        if controller.state is None:
            controller.state = {"messages": []}

        # Process commands
        for command in request.commands:
            if command.type == "add-message":
                # Add message to messages array
                controller.state["messages"].append(command.message)

        # Run your custom agent and stream updates
        async for message in your_agent.stream():
            # Push message to messages array
            controller.state["messages"].append(message)

    # Create streaming response
    stream = create_run(run_callback, state=request.state)
    return DataStreamResponse(stream)
```
```python
from assistant_stream.serialization import DataStreamResponse
from assistant_stream import RunController, create_run
from assistant_stream.modules.langgraph import append_langgraph_event
from langchain_core.messages import HumanMessage

@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
    """Chat endpoint using LangGraph with streaming."""

    async def run_callback(controller: RunController):
        # Initialize controller state
        if controller.state is None:
            controller.state = {}
        if "messages" not in controller.state:
            controller.state["messages"] = []

        input_messages = []

        # Process commands
        for command in request.commands:
            if command.type == "add-message":
                text_parts = [
                    part.text
                    for part in command.message.parts
                    if part.type == "text" and part.text
                ]
                if text_parts:
                    input_messages.append(HumanMessage(content=" ".join(text_parts)))

        # Create initial state for LangGraph
        input_state = {"messages": input_messages}

        # Stream events from LangGraph
        async for namespace, event_type, chunk in graph.astream(
            input_state,
            stream_mode=["messages", "updates"],
            subgraphs=True,
        ):
            append_langgraph_event(controller.state, namespace, event_type, chunk)

    # Create streaming response
    stream = create_run(run_callback, state=request.state)
    return DataStreamResponse(stream)
```
Full example: [`python/assistant-transport-backend-langgraph`](https://github.com/assistant-ui/assistant-ui/tree/main/python/assistant-transport-backend-langgraph)
## Streaming Protocol
The assistant-stream state replication protocol allows for streaming updates to an arbitrary JSON object.
### Operations
The protocol supports two operations:
> **Note:** We've found that these two operations are enough to handle all sorts of complex state operations efficiently. `set` handles value updates and nested structures, while `append-text` enables efficient streaming of text content.
#### `set`
Sets a value at a specific path in the JSON object.
```json
// Operation
{ "type": "set", "path": ["status"], "value": "completed" }
// Before
{ "status": "pending" }
// After
{ "status": "completed" }
```
#### `append-text`
Appends text to an existing string value at a path.
```json
// Operation
{ "type": "append-text", "path": ["message"], "value": " World" }
// Before
{ "message": "Hello" }
// After
{ "message": "Hello World" }
```
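As a worked example, here is a minimal sketch of applying these operations to a client-side state object (the type and function names are illustrative, not part of the library API):

```typescript
type ObjectStreamOperation =
  | { type: "set"; path: (string | number)[]; value: unknown }
  | { type: "append-text"; path: (string | number)[]; value: string };

// Walk to the parent of the addressed location, then apply the operation.
function applyOperation(state: any, op: ObjectStreamOperation): void {
  const parent = op.path.slice(0, -1).reduce((obj, key) => obj[key], state);
  const last = op.path[op.path.length - 1];
  if (op.type === "set") {
    parent[last] = op.value;
  } else {
    parent[last] = (parent[last] ?? "") + op.value;
  }
}
```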
### Wire Format
The wire format will be migrated to Server-Sent Events (SSE) in a future
release.
The wire format is inspired by [AI SDK's data stream protocol](https://sdk.vercel.ai/docs/ai-sdk-ui/stream-protocol).
**State Update:**
```
aui-state:ObjectStreamOperation[]
```
```
aui-state:[{"type":"set","path":["status"],"value":"completed"}]
```
**Error:**
```
3:string
```
```
3:"error message"
```
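For illustration, a client could decode one frame like this (a sketch against the format above, reusing the `ObjectStreamOperation` type from the previous sketch; this is not the library's actual parser):

```typescript
function parseFrame(
  line: string,
): { kind: "state"; ops: ObjectStreamOperation[] } | { kind: "error"; message: string } {
  if (line.startsWith("aui-state:")) {
    // State update: JSON array of operations after the prefix
    return { kind: "state", ops: JSON.parse(line.slice("aui-state:".length)) };
  }
  if (line.startsWith("3:")) {
    // Error frame: JSON string after the prefix
    return { kind: "error", message: JSON.parse(line.slice(2)) };
  }
  throw new Error(`Unknown frame: ${line}`);
}
```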
## Building a Frontend
Now let's set up the frontend. The state converter is the heart of the integration: it transforms your agent's state into the format assistant-ui expects.
The `useAssistantTransportRuntime` hook is used to configure the runtime. It accepts the following config:
```typescript
{
  initialState: T,
  api: string,
  resumeApi?: string,
  converter: (state: T, connectionMetadata: ConnectionMetadata) => AssistantTransportState,
  headers?: Record<string, string> | (() => Promise<Record<string, string>>),
  body?: object,
  onResponse?: (response: Response) => void,
  onFinish?: () => void,
  onError?: (error: Error) => void,
  onCancel?: () => void
}
```
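Put together, a minimal sketch (assuming the hook is exported from `@assistant-ui/react` like `useAssistantTransportState` further below; the endpoint URL and state shape are placeholders):

```typescript
import { useAssistantTransportRuntime } from "@assistant-ui/react";

type AgentState = { messages: unknown[] };

const runtime = useAssistantTransportRuntime({
  initialState: { messages: [] } as AgentState,
  api: "http://localhost:8010/assistant",
  // The converter is covered in the next section
  converter: (state: AgentState, meta) => ({
    messages: [],
    isRunning: meta.isSending,
  }),
});
```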
### State Converter
The state converter is the core of your frontend integration. It transforms your agent's state into assistant-ui's message format.
```typescript
(
  state: T, // Your agent's state
  connectionMetadata: {
    pendingCommands: Command[], // Commands not yet sent to backend
    isSending: boolean // Whether a request is in flight
  }
) => {
  messages: ThreadMessage[], // Messages to display
  isRunning: boolean // Whether the agent is running
}
```
### Converting Messages
Use the `createMessageConverter` API to transform your agent's messages to assistant-ui format:
```typescript
import { unstable_createMessageConverter as createMessageConverter } from "@assistant-ui/react";

// Define your message type
type YourMessageType = {
  id: string;
  role: "user" | "assistant";
  content: string;
  timestamp: number;
};

// Define a converter function for a single message
const exampleMessageConverter = (message: YourMessageType) => {
  // Transform a single message to assistant-ui format
  return {
    role: message.role,
    content: [{ type: "text", text: message.content }],
  };
};

const messageConverter = createMessageConverter(exampleMessageConverter);

const converter = (state: YourAgentState) => {
  return {
    messages: messageConverter.toThreadMessages(state.messages),
    isRunning: false,
  };
};
```
```typescript
import { unstable_createMessageConverter as createMessageConverter } from "@assistant-ui/react";
import { convertLangChainMessages } from "@assistant-ui/react-langgraph";

const messageConverter = createMessageConverter(convertLangChainMessages);

const converter = (state: YourAgentState) => {
  return {
    messages: messageConverter.toThreadMessages(state.messages),
    isRunning: false,
  };
};
```
**Reverse mapping:**
The message converter allows you to retrieve the original message format anywhere inside assistant-ui. This lets you access your agent's native message structure from any assistant-ui component:
```typescript
// Get original message(s) from a ThreadMessage anywhere in assistant-ui
const originalMessage = messageConverter.toOriginalMessage(threadMessage);
```
### Optimistic Updates from Commands
The converter also receives `connectionMetadata` which contains pending commands. Use this to show optimistic updates:
```typescript
const converter = (state: State, connectionMetadata: ConnectionMetadata) => {
// Extract pending messages from commands
const optimisticMessages = connectionMetadata.pendingCommands
.filter((c) => c.type === "add-message")
.map((c) => c.message);
return {
messages: [...state.messages, ...optimisticMessages],
isRunning: connectionMetadata.isSending || false
};
};
```
## Handling Errors and Cancellations
The `onError` and `onCancel` callbacks receive an `updateState` function that allows you to update the agent state on the client side without making a server request:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
onError: (error, { commands, updateState }) => {
console.error("Error occurred:", error);
console.log("Commands in transit:", commands);
// Update state to reflect the error
updateState((currentState) => ({
...currentState,
lastError: error.message,
}));
},
onCancel: ({ commands, updateState }) => {
console.log("Request cancelled");
console.log("Commands in transit or queued:", commands);
// Update state to reflect cancellation
updateState((currentState) => ({
...currentState,
status: "cancelled",
}));
},
});
```
> **Note:** `onError` receives commands that were in transit, while `onCancel` receives both in-transit and queued commands.
## Custom Headers and Body
You can pass custom headers and body to the backend endpoint:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
headers: {
"Authorization": "Bearer token",
"X-Custom-Header": "value",
},
body: {
customField: "value",
},
});
```
### Dynamic Headers and Body
You can also evaluate the header and body payloads on every request by passing an async function:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
headers: async () => ({
"Authorization": `Bearer ${await getAccessToken()}`,
"X-Request-ID": crypto.randomUUID(),
}),
body: async () => ({
customField: "value",
requestId: crypto.randomUUID(),
timestamp: Date.now(),
}),
});
```
## Resuming from a Sync Server
> **Note:** The sync server is currently available only as part of the enterprise plan. Please contact us for more information.
To enable resumability, you need to:
1. Pass a `resumeApi` URL to `useAssistantTransportRuntime` that points to your sync server
2. Use the `unstable_resumeRun` API to resume a conversation
```typescript
import { useAui } from "@assistant-ui/react";
const runtime = useAssistantTransportRuntime({
// ... other options
api: "http://localhost:8010/assistant",
resumeApi: "http://localhost:8010/resume", // Sync server endpoint
// ... other options
});
// Typically called on thread switch or mount to check if sync server has anything to resume
const aui = useAui();
aui.thread().unstable_resumeRun({
parentId: null, // Ignored (will be removed in a future version)
});
```
## Accessing Runtime State
Use the `useAssistantTransportState` hook to access the current agent state from any component:
```typescript
import { useAssistantTransportState } from "@assistant-ui/react";
function MyComponent() {
  const state = useAssistantTransportState();
  return <pre>{JSON.stringify(state)}</pre>;
}
```
You can also pass a selector function to extract specific values:
```typescript
function MyComponent() {
  const messages = useAssistantTransportState((state) => state.messages);
  return <div>Message count: {messages.length}</div>;
}
```
### Type Safety
Use module augmentation to add types for your agent state:
```typescript title="assistant.config.ts"
import "@assistant-ui/react";
declare module "@assistant-ui/react" {
namespace Assistant {
interface ExternalState {
myState: {
messages: Message[];
customField: string;
};
}
}
}
```
> **Note:** Place this file anywhere in your project (e.g., `src/assistant.config.ts` or at the project root). TypeScript will automatically pick up the type augmentation through module resolution—you don't need to import this file anywhere.
After adding the type augmentation, `useAssistantTransportState` will be fully typed:
```typescript
function MyComponent() {
  // TypeScript knows about your custom fields
  const customField = useAssistantTransportState((state) => state.customField);
  return <div>{customField}</div>;
}
```
### Accessing the Original Message
If you're using `createMessageConverter`, you can access the original message format from any assistant-ui component using the converter's `toOriginalMessage` method:
```typescript
import { unstable_createMessageConverter as createMessageConverter } from "@assistant-ui/react";
import { useMessage } from "@assistant-ui/react";
const messageConverter = createMessageConverter(yourMessageConverter);
function MyMessageComponent() {
  const message = useMessage();
  // Get the original message(s) from the converted ThreadMessage
  const originalMessage = messageConverter.toOriginalMessage(message);
  // Access your agent's native message structure
  return <div>{originalMessage.yourCustomField}</div>;
}
```
You can also use `toOriginalMessages` to get all original messages when a ThreadMessage was created from multiple source messages:
```typescript
const originalMessages = messageConverter.toOriginalMessages(message);
```
## Frontend Reference Implementation
A complete provider using a custom agent state and message shape:
```tsx
"use client";
import {
AssistantRuntimeProvider,
AssistantTransportConnectionMetadata,
useAssistantTransportRuntime,
} from "@assistant-ui/react";
// Message is a placeholder for your agent's own message shape
type State = {
  messages: Message[];
};
// Converter function: transforms agent state to assistant-ui format
const converter = (
state: State,
connectionMetadata: AssistantTransportConnectionMetadata,
) => {
// Add optimistic updates for pending commands
const optimisticMessages = connectionMetadata.pendingCommands
.filter((c) => c.type === "add-message")
.map((c) => c.message);
return {
messages: [...state.messages, ...optimisticMessages],
isRunning: connectionMetadata.isSending || false,
};
};
export function MyRuntimeProvider({ children }) {
const runtime = useAssistantTransportRuntime({
initialState: {
messages: [],
},
api: "http://localhost:8010/assistant",
converter,
headers: async () => ({
"Authorization": "Bearer token",
}),
body: {
"custom-field": "custom-value",
},
onResponse: (response) => {
console.log("Response received from server");
},
onFinish: () => {
console.log("Conversation completed");
},
onError: (error, { commands, updateState }) => {
console.error("Assistant transport error:", error);
console.log("Commands in transit:", commands);
},
onCancel: ({ commands, updateState }) => {
console.log("Request cancelled");
console.log("Commands in transit or queued:", commands);
},
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
The same provider built on LangChain messages with the prebuilt converter:
```tsx
"use client";
import {
AssistantRuntimeProvider,
AssistantTransportConnectionMetadata,
unstable_createMessageConverter as createMessageConverter,
useAssistantTransportRuntime,
} from "@assistant-ui/react";
import {
convertLangChainMessages,
LangChainMessage,
} from "@assistant-ui/react-langgraph";
type State = {
messages: LangChainMessage[];
};
const LangChainMessageConverter = createMessageConverter(
convertLangChainMessages,
);
// Converter function: transforms agent state to assistant-ui format
const converter = (
state: State,
connectionMetadata: AssistantTransportConnectionMetadata,
) => {
// Add optimistic updates for pending commands
const optimisticStateMessages = connectionMetadata.pendingCommands.map(
(c): LangChainMessage[] => {
if (c.type === "add-message") {
return [
{
type: "human" as const,
content: [
{
type: "text" as const,
text: c.message.parts
.map((p) => (p.type === "text" ? p.text : ""))
.join("\n"),
},
],
},
];
}
return [];
},
);
const messages = [...state.messages, ...optimisticStateMessages.flat()];
return {
messages: LangChainMessageConverter.toThreadMessages(messages),
isRunning: connectionMetadata.isSending || false,
};
};
export function MyRuntimeProvider({ children }) {
const runtime = useAssistantTransportRuntime({
initialState: {
messages: [],
},
api: "http://localhost:8010/assistant",
converter,
headers: async () => ({
"Authorization": "Bearer token",
}),
body: {
"custom-field": "custom-value",
},
onResponse: (response) => {
console.log("Response received from server");
},
onFinish: () => {
console.log("Conversation completed");
},
onError: (error, { commands, updateState }) => {
console.error("Assistant transport error:", error);
console.log("Commands in transit:", commands);
},
onCancel: ({ commands, updateState }) => {
console.log("Request cancelled");
console.log("Commands in transit or queued:", commands);
},
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
Full example: [`examples/with-assistant-transport`](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-assistant-transport)
## Custom Commands
### Defining Custom Commands
Use module augmentation to define a custom command:
```typescript title="assistant.config.ts"
import "@assistant-ui/react";
declare module "@assistant-ui/react" {
namespace Assistant {
interface Commands {
myCustomCommand: {
type: "my-custom-command";
data: string;
};
}
}
}
```
### Issuing Commands
Use the `useAssistantTransportSendCommand` hook to send custom commands:
```typescript
import { useAssistantTransportSendCommand } from "@assistant-ui/react";
function MyComponent() {
  const sendCommand = useAssistantTransportSendCommand();

  const handleClick = () => {
    sendCommand({
      type: "my-custom-command",
      data: "Hello, world!",
    });
  };

  return <button onClick={handleClick}>Send command</button>;
}
```
### Backend Integration
The backend receives custom commands in the `commands` array, just like built-in commands:
```python
for command in request.commands:
    if command.type == "add-message":
        # Handle add-message command
        ...
    elif command.type == "add-tool-result":
        # Handle add-tool-result command
        ...
    elif command.type == "my-custom-command":
        # Handle your custom command
        data = command.data
```
### Optimistic Updates
Update the [state converter](#state-converter) to optimistically handle the custom command:
```typescript
const converter = (state: State, connectionMetadata: ConnectionMetadata) => {
// Filter custom commands from pending commands
const customCommands = connectionMetadata.pendingCommands.filter(
(c) => c.type === "my-custom-command"
);
// Apply optimistic updates based on custom commands
const optimisticState = {
...state,
customData: customCommands.map((c) => c.data),
};
return {
messages: state.messages,
state: optimisticState,
isRunning: connectionMetadata.isSending || false,
};
};
```
### Cancellation and Error Behavior
Custom commands follow the same lifecycle as built-in commands. You can update your `onError` and `onCancel` handlers to take custom commands into account:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
onError: (error, { commands, updateState }) => {
// Check if any custom commands were in transit
const customCommands = commands.filter((c) => c.type === "my-custom-command");
if (customCommands.length > 0) {
// Handle custom command errors
updateState((state) => ({
...state,
customCommandFailed: true,
}));
}
},
onCancel: ({ commands, updateState }) => {
// Check if any custom commands were queued or in transit
const customCommands = commands.filter((c) => c.type === "my-custom-command");
if (customCommands.length > 0) {
// Handle custom command cancellation
updateState((state) => ({
...state,
customCommandCancelled: true,
}));
}
},
});
```
# Data Stream Protocol
URL: /docs/runtimes/data-stream
Integration with data stream protocol endpoints for streaming AI responses.
***
title: Data Stream Protocol
description: Integration with data stream protocol endpoints for streaming AI responses.
----------------------------------------------------------------------------------------
import { InstallCommand } from "@/components/docs/fumadocs/install/install-command";
The `@assistant-ui/react-data-stream` package provides integration with data stream protocol endpoints, enabling streaming AI responses with tool support and state management.
## Overview
The data stream protocol is a standardized format for streaming AI responses that supports:
* **Streaming text responses** with real-time updates
* **Tool calling** with structured parameters and results
* **State management** for conversation context
* **Error handling** and cancellation support
* **Attachment support** for multimodal interactions
## Installation
## Basic Usage
### Set up the Runtime
Use `useDataStreamRuntime` to connect to your data stream endpoint:
```tsx title="app/page.tsx"
"use client";
import { useDataStreamRuntime } from "@assistant-ui/react-data-stream";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { Thread } from "@/components/assistant-ui/thread";
export default function ChatPage() {
const runtime = useDataStreamRuntime({
api: "/api/chat",
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
### Create Backend Endpoint
Your backend endpoint should accept POST requests and return data stream responses:
```typescript title="app/api/chat/route.ts"
import { createAssistantStreamResponse } from "assistant-stream";
export async function POST(request: Request) {
const { messages, tools, system, threadId } = await request.json();
return createAssistantStreamResponse(async (controller) => {
// Process the request with your AI provider
const stream = await processWithAI({
messages,
tools,
system,
});
// Stream the response
for await (const chunk of stream) {
controller.appendText(chunk.text);
}
});
}
```
The request body includes the following fields (a type sketch follows this list):
* `messages` - The conversation history
* `tools` - Available tool definitions
* `system` - System prompt (if configured)
* `threadId` - The current thread/conversation identifier
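For reference, here is a rough TypeScript sketch of that body. The field types are assumptions inferred from the list above, not an exact type exported by the package:

```typescript
// Approximate shape of the POST body sent to your endpoint (illustrative only)
type ChatRequestBody = {
  messages: unknown[]; // conversation history
  tools?: Record<string, unknown>; // available tool definitions
  system?: string; // system prompt, if configured
  threadId: string; // current thread/conversation identifier
};
```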
## Advanced Configuration
### Custom Headers and Authentication
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
headers: {
"Authorization": "Bearer " + token,
"X-Custom-Header": "value",
},
credentials: "include",
});
```
### Dynamic Headers
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
headers: async () => {
const token = await getAuthToken();
return {
"Authorization": "Bearer " + token,
};
},
});
```
### Dynamic Body
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
headers: async () => ({
"Authorization": `Bearer ${await getAuthToken()}`,
}),
body: async () => ({
requestId: crypto.randomUUID(),
timestamp: Date.now(),
signature: await computeSignature(),
}),
});
```
### Event Callbacks
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
onResponse: (response) => {
console.log("Response received:", response.status);
},
onFinish: (message) => {
console.log("Message completed:", message);
},
onError: (error) => {
console.error("Error occurred:", error);
},
onCancel: () => {
console.log("Request cancelled");
},
});
```
## Tool Integration
> **Note:** Human-in-the-loop tools (using `human()` for tool interrupts) are not supported in the data stream runtime. If you need human approval workflows or interactive tool UIs, consider using [LocalRuntime](/docs/runtimes/custom/local) or [Assistant Cloud](/docs/cloud/overview) instead.
### Frontend Tools
Use the `frontendTools` helper to serialize client-side tools:
```tsx
import { frontendTools } from "@assistant-ui/react-data-stream";
import { makeAssistantTool } from "@assistant-ui/react";
import { z } from "zod";
const weatherTool = makeAssistantTool({
toolName: "get_weather",
description: "Get current weather",
parameters: z.object({
location: z.string(),
}),
execute: async ({ location }) => {
const weather = await fetchWeather(location);
return `Weather in ${location}: ${weather}`;
},
});
const runtime = useDataStreamRuntime({
api: "/api/chat",
body: {
tools: frontendTools({
get_weather: weatherTool,
}),
},
});
```
### Backend Tool Processing
Your backend should handle tool calls and return results:
```typescript title="Backend tool handling"
// Tools are automatically forwarded to your endpoint
const { messages, tools } = await request.json();
// Process tools with your AI provider
const response = await ai.generateText({
messages,
tools,
// Tool results are streamed back automatically
});
```
## Assistant Cloud Integration
For Assistant Cloud deployments, use `useCloudRuntime`:
```tsx
import { useCloudRuntime } from "@assistant-ui/react-data-stream";
const runtime = useCloudRuntime({
cloud: assistantCloud,
assistantId: "my-assistant-id",
});
```
> **Note:** The `useCloudRuntime` hook is currently under active development and not yet ready for production use.
## Message Conversion
### Framework-Agnostic Conversion (Recommended)
For custom integrations, use the framework-agnostic utilities from `assistant-stream`:
```tsx
import { toGenericMessages, toToolsJSONSchema } from "assistant-stream";
// Convert messages to a generic format
const genericMessages = toGenericMessages(messages);
// Convert tools to JSON Schema format
const toolSchemas = toToolsJSONSchema(tools);
```
The `GenericMessage` format can be easily converted to any LLM provider format:
```tsx
import type { GenericMessage } from "assistant-stream";
// GenericMessage is a union of:
// - { role: "system"; content: string }
// - { role: "user"; content: (GenericTextPart | GenericFilePart)[] }
// - { role: "assistant"; content: (GenericTextPart | GenericToolCallPart)[] }
// - { role: "tool"; content: GenericToolResultPart[] }
```
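As an illustration, here is a minimal sketch that flattens `GenericMessage` values into simple role/content pairs. The output shape and the text-only handling are simplifying assumptions for this example, not part of `assistant-stream`:

```tsx
import type { GenericMessage } from "assistant-stream";

// Minimal sketch: keep system prompts and text parts; tool calls,
// tool results, and file parts are skipped for brevity.
function toSimpleChatMessages(messages: GenericMessage[]) {
  return messages.flatMap((m) => {
    if (m.role === "system") return [{ role: "system", content: m.content }];
    if (m.role === "user" || m.role === "assistant") {
      const text = m.content
        .map((part) => (part.type === "text" ? part.text : ""))
        .filter(Boolean)
        .join("\n");
      return text ? [{ role: m.role, content: text }] : [];
    }
    return []; // tool results need provider-specific handling
  });
}
```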
### AI SDK Specific Conversion
For AI SDK integration, use `toLanguageModelMessages`:
```tsx
import { toLanguageModelMessages } from "@assistant-ui/react-data-stream";
// Convert to AI SDK LanguageModelV2Message format
const languageModelMessages = toLanguageModelMessages(messages, {
unstable_includeId: true, // Include message IDs
});
```
`toLanguageModelMessages` internally uses `toGenericMessages` and adds AI SDK-specific transformations.
For new custom integrations, prefer using `toGenericMessages` directly.
## Error Handling
The runtime automatically handles common error scenarios:
* **Network errors**: Automatically retried with exponential backoff
* **Stream interruptions**: Gracefully handled with partial content preservation
* **Tool execution errors**: Displayed in the UI with error states
* **Cancellation**: Clean abort signal handling
## Best Practices
### Performance Optimization
```tsx
// Use React.memo for expensive components
const OptimizedThread = React.memo(Thread);
// Memoize runtime configuration
const runtimeConfig = useMemo(() => ({
api: "/api/chat",
headers: { "Authorization": `Bearer ${token}` },
}), [token]);
const runtime = useDataStreamRuntime(runtimeConfig);
```
### Error Boundaries
```tsx
import { ErrorBoundary } from "react-error-boundary";
function ChatErrorFallback({ error, resetErrorBoundary }) {
  return (
    <div role="alert">
      <p>Something went wrong:</p>
      <pre>{error.message}</pre>
      <button onClick={resetErrorBoundary}>Try again</button>
    </div>
  );
}
export default function App() {
  return (
    <ErrorBoundary FallbackComponent={ChatErrorFallback}>
      {/* your chat UI goes here */}
    </ErrorBoundary>
  );
}
```
### State Persistence
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
body: {
// Include conversation state
state: conversationState,
},
onFinish: (message) => {
// Save state after each message
saveConversationState(message.metadata.unstable_state);
},
});
```
## Examples
* **[Basic Data Stream Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-data-stream)** - Simple streaming chat
* **[Tool Integration Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-data-stream-tools)** - Frontend and backend tools
* **[Authentication Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-data-stream-auth)** - Secure endpoints
## API Reference
For detailed API documentation, see the [`@assistant-ui/react-data-stream` API Reference](/docs/api-reference/integrations/react-data-stream).
# Helicone
URL: /docs/runtimes/helicone
Configure Helicone proxy for OpenAI API logging and monitoring.
***
title: Helicone
description: Configure Helicone proxy for OpenAI API logging and monitoring.
----------------------------------------------------------------------------
Helicone acts as a proxy for your OpenAI API calls, enabling detailed logging and monitoring. To integrate, update your API base URL and add the Helicone-Auth header.
## AI SDK by Vercel
1. **Set Environment Variables:**
* `HELICONE_API_KEY`
* `OPENAI_API_KEY`
2. **Configure the OpenAI client:**
```ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";
const openai = createOpenAI({
baseURL: "https://oai.helicone.ai/v1",
headers: {
"Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
},
});
export async function POST(req: Request) {
const { prompt } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"),
    prompt,
  });
  // streamText returns a result object; convert it to an HTTP response
  // (named toUIMessageStreamResponse in AI SDK v5)
  return result.toDataStreamResponse();
}
```
## LangChain Integration (Python)
1. **Set Environment Variables:**
* `HELICONE_API_KEY`
* `OPENAI_API_KEY`
2. **Configure ChatOpenAI:**
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    openai_api_base="https://oai.helicone.ai/v1",
    openai_api_key=os.environ["OPENAI_API_KEY"],
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)
```
## Summary
Update your API base URL to `https://oai.helicone.ai/v1` and add the `Helicone-Auth` header with your API key either in your Vercel AI SDK or LangChain configuration.
# LangChain LangServe
URL: /docs/runtimes/langserve
Connect to LangServe endpoints via Vercel AI SDK integration.
***
title: LangChain LangServe
description: Connect to LangServe endpoints via Vercel AI SDK integration.
--------------------------------------------------------------------------
> **Note:** This integration has not been tested with AI SDK v5.
## Overview
Integration with a LangServe server via Vercel AI SDK.
## Getting Started
import { InstallCommand } from "@/components/docs/fumadocs/install/install-command";
### Create a Next.js project
```sh
npx create-next-app@latest my-app
cd my-app
```
### Install `@langchain/core`, AI SDK and `@assistant-ui/react`
### Setup a backend route under `/api/chat`
```tsx title="@/app/api/chat/route.ts"
import { RemoteRunnable } from "@langchain/core/runnables/remote";
import { toDataStreamResponse } from "@ai-sdk/langchain";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
// TODO replace with your own langserve URL
const remoteChain = new RemoteRunnable({
url: "",
});
const stream = await remoteChain.stream({
messages,
});
return toDataStreamResponse(stream);
}
```
### Define a `MyRuntimeProvider` component
```tsx twoslash include MyRuntimeProvider title="@/app/MyRuntimeProvider.tsx"
// @filename: /app/MyRuntimeProvider.tsx
// ---cut---
"use client";
import { useChat } from "@ai-sdk/react";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
export function MyRuntimeProvider({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
const runtime = useChatRuntime();
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
### Wrap your app in `MyRuntimeProvider`
```tsx twoslash title="@/app/layout.tsx"
// @include: MyRuntimeProvider
// @filename: /app/layout.tsx
// ---cut---
import type { ReactNode } from "react";
import { MyRuntimeProvider } from "@/app/MyRuntimeProvider";
export default function RootLayout({
children,
}: Readonly<{
children: ReactNode;
}>) {
  return (
    <html lang="en">
      <body>
        <MyRuntimeProvider>{children}</MyRuntimeProvider>
      </body>
    </html>
  );
}
```
# Picking a Runtime
URL: /docs/runtimes/pick-a-runtime
Which runtime fits your backend? Decision guide for common setups.
***
title: Picking a Runtime
description: Which runtime fits your backend? Decision guide for common setups.
-------------------------------------------------------------------------------
Choosing the right runtime is crucial for your assistant-ui implementation. This guide helps you navigate the options based on your specific needs.
## Quick Decision Tree
```mermaid
graph TD
A[What's your starting point?] --> B{Existing Framework?}
B -->|Vercel AI SDK| C[Use AI SDK Integration]
B -->|LangGraph| D[Use LangGraph Runtime]
B -->|LangServe| E[Use LangServe Runtime]
B -->|Mastra| F[Use Mastra Runtime]
B -->|Custom Backend| G{State Management?}
G -->|Let assistant-ui handle it| H[Use LocalRuntime]
G -->|I'll manage it myself| I[Use ExternalStoreRuntime]
```
## Core Runtimes
These are the foundational runtimes that power assistant-ui:
## Pre-Built Integrations
For popular frameworks, we provide ready-to-use integrations built on top of our core runtimes:
## Understanding Runtime Architecture
### How Pre-Built Integrations Work
The pre-built integrations (AI SDK, LangGraph, etc.) are **not separate runtime types**. They're convenient wrappers built on top of our core runtimes:
* **AI SDK Integration** → Built on `LocalRuntime` with streaming adapter
* **LangGraph Runtime** → Built on `LocalRuntime` with graph execution adapter
* **LangServe Runtime** → Built on `LocalRuntime` with LangServe client adapter
* **Mastra Runtime** → Built on `LocalRuntime` with workflow adapter
This means you get all the benefits of `LocalRuntime` (automatic state management, built-in features) with zero configuration for your specific framework.
### When to Use Pre-Built vs Core Runtimes
**Use a pre-built integration when:**
* You're already using that framework
* You want the fastest possible setup
* The integration covers your needs
**Use a core runtime when:**
* You have a custom backend
* You need features not exposed by the integration
* You want full control over the implementation
Pre-built integrations can always be replaced with a custom `LocalRuntime` or `ExternalStoreRuntime` implementation if you need more control later.
## Feature Comparison
### Core Runtime Capabilities
| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
| -------------------- | -------------- | ----------------------- |
| **State Management** | Automatic | You control |
| **Setup Complexity** | Simple | Moderate |
| **Message Editing** | Built-in | Implement `onEdit` |
| **Branch Switching** | Built-in | Implement `setMessages` |
| **Regeneration** | Built-in | Implement `onReload` |
| **Cancellation** | Built-in | Implement `onCancel` |
| **Multi-thread** | Via adapters | Via adapters |
### Available Adapters
| Adapter | `LocalRuntime` | `ExternalStoreRuntime` |
| ----------- | -------------- | ---------------------- |
| ChatModel | ✅ Required | ❌ N/A |
| Attachments | ✅ | ✅ |
| Speech | ✅ | ✅ |
| Feedback | ✅ | ✅ |
| History | ✅ | ❌ Use your state |
| Suggestions | ✅ | ❌ Use your state |
## Common Implementation Patterns
### Vercel AI SDK with Streaming
```tsx
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { Thread } from "@/components/assistant-ui/thread";
export function MyAssistant() {
const runtime = useChatRuntime({
api: "/api/chat",
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
### Custom Backend with `LocalRuntime`
```tsx
import { useLocalRuntime } from "@assistant-ui/react";
const runtime = useLocalRuntime({
async run({ messages, abortSignal }) {
const response = await fetch("/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ messages }),
signal: abortSignal,
});
    // Expected to resolve to a ChatModelRunResult-shaped object,
    // e.g. { content: [{ type: "text", text: "..." }] }
    return response.json();
},
});
```
### Redux Integration with `ExternalStoreRuntime`
```tsx
import { useExternalStoreRuntime } from "@assistant-ui/react";
const messages = useSelector(selectMessages);
const dispatch = useDispatch();
const runtime = useExternalStoreRuntime({
messages,
onNew: async (message) => {
dispatch(addUserMessage(message));
const response = await api.chat(message);
dispatch(addAssistantMessage(response));
},
setMessages: (messages) => dispatch(setMessages(messages)),
onEdit: async (message) => dispatch(editMessage(message)),
onReload: async (parentId) => dispatch(reloadMessage(parentId)),
});
```
## Examples
Explore our implementation examples:
* **[AI SDK v6 Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-ai-sdk-v6)** - Vercel AI SDK with `useChatRuntime`
* **[External Store Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-external-store)** - `ExternalStoreRuntime` with custom state
* **[Assistant Cloud Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud)** - Multi-thread with cloud persistence
* **[LangGraph Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph)** - Agent workflows
* **[OpenAI Assistants Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-openai-assistants)** - OpenAI Assistants API
## Common Pitfalls to Avoid
### LocalRuntime Pitfalls
* **Forgetting the adapter**: `LocalRuntime` requires a `ChatModelAdapter` - it won't work without one
* **Not handling errors**: Always handle API errors in your adapter's `run` function (see the sketch after this list)
* **Missing abort signal**: Pass `abortSignal` to your fetch calls for proper cancellation
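For the error-handling pitfall, a minimal pattern looks like this (a sketch based on the `LocalRuntime` example above; adjust the error shape to your API):

```tsx
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal, // enables proper cancellation
    });
    // Surface HTTP failures so the UI can show an error state
    if (!response.ok) throw new Error(`Chat API error: ${response.status}`);
    return response.json();
  },
});
```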
### ExternalStoreRuntime Pitfalls
* **Mutating state**: Always create new arrays/objects when updating messages (see the snippet after this list)
* **Missing handlers**: Each UI feature requires its corresponding handler (e.g., no edit button without `onEdit`)
* **Forgetting optimistic updates**: Set `isRunning` to `true` for loading states
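For the state-mutation pitfall, the difference looks like this (`setMessages` stands in for whatever updater your store exposes):

```tsx
// ❌ Mutating the existing array: the runtime can't detect the change
messages.push(newMessage);

// ✅ Creating a new array: the store sees a new reference and re-renders
setMessages([...messages, newMessage]);
```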
### General Pitfalls
* **Wrong integration level**: Don't use `LocalRuntime` if you already have Vercel AI SDK - use the AI SDK integration instead
* **Over-engineering**: Start with pre-built integrations before building custom solutions
* **Ignoring TypeScript**: The types will guide you to the correct implementation
## Next Steps
1. **Choose your runtime** based on the decision tree above
2. **Follow the specific guide**:
* [AI SDK Integration](/docs/runtimes/ai-sdk/use-chat)
* [`LocalRuntime` Guide](/docs/runtimes/custom/local)
* [`ExternalStoreRuntime` Guide](/docs/runtimes/custom/external-store)
* [LangGraph Integration](/docs/runtimes/langgraph)
3. **Start with an example** from our [examples repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples)
4. **Add features progressively** using adapters
5. **Consider Assistant Cloud** for production persistence
Need help? Join our [Discord community](https://discord.gg/S9dwgCNEFs) or check the [GitHub](https://github.com/assistant-ui/assistant-ui).
# AssistantModal
URL: /docs/ui/assistant-modal
Floating chat bubble for support widgets and help desks.
***
title: AssistantModal
description: Floating chat bubble for support widgets and help desks.
---------------------------------------------------------------------
import { ParametersTable } from "@/components/docs/tables/ParametersTable";
import { InstallCommand } from "@/components/docs/fumadocs/install/install-command";
import { AssistantModalSample } from "@/components/docs/samples/assistant-modal";
A floating chat modal built on Radix UI Popover. Ideal for support widgets, help desks, and embedded assistants.
## Getting Started
### Add `assistant-modal`
This adds `/components/assistant-ui/assistant-modal.tsx` to your project, which you can adjust as needed.
### Use in your application
```tsx title="/app/page.tsx" {1,6}
import { AssistantModal } from "@/components/assistant-ui/assistant-modal";

export default function Home() {
  return (
    <main>
      <AssistantModal />
    </main>
  );
}
```
## Anatomy
The AssistantModal component is built with the following primitives:
```tsx
import { AssistantModalPrimitive } from "@assistant-ui/react";
<AssistantModalPrimitive.Root>
  <AssistantModalPrimitive.Trigger />
  <AssistantModalPrimitive.Content>
    {/* Thread component goes here */}
  </AssistantModalPrimitive.Content>
</AssistantModalPrimitive.Root>
```
## API Reference
### Root
Contains all parts of the modal. Based on Radix UI Popover.
void",
description: "Callback when the open state changes.",
},
{
name: "unstable_openOnRunStart",
type: "boolean",
description: "Automatically open the modal when the assistant starts running.",
},
]}
/>
### Trigger
A button that toggles the modal open/closed state.
This primitive renders a `button` element.