# Architecture
URL: /docs/architecture
How components, runtimes, and cloud services fit together.
import { Sparkles, PanelsTopLeft, Database, Terminal } from "lucide-react";
assistant-ui is built on these main pillars: \[#assistant-ui-is-built-on-these-main-pillars]
* **Frontend components**: Shadcn UI chat components with built-in state management
* **Runtime**: State management layer connecting UI to LLMs and backend services
* **Assistant Cloud**: Hosted service for thread persistence, history, and user management
1. Frontend components \[#1-frontend-components]
Styled, fully functional chat components built on top of shadcn/ui, with context-based state management provided by the assistant-ui runtime provider. [View our components](/docs/ui/thread)
2. Runtime \[#2-runtime]
A React state management context for assistant chat. The runtime handles data conversion between local state and calls to backends and LLMs. We offer runtimes for frameworks such as the Vercel AI SDK, LangGraph, LangChain, and Helicone, as well as a local runtime and an ExternalStore runtime for when you need full control of the frontend message state. [See the runtimes we support](/docs/runtimes/pick-a-runtime)
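As a sketch of the ExternalStore option, assuming the `useExternalStoreRuntime` hook from `@assistant-ui/react` and a hypothetical `callMyBackend` helper (your message shape and backend call will differ):

```tsx
import { useState, type ReactNode } from "react";
import {
  AssistantRuntimeProvider,
  useExternalStoreRuntime,
  type AppendMessage,
} from "@assistant-ui/react";

// hypothetical app-owned message shape
type MyMessage = { role: "user" | "assistant"; text: string };

// hypothetical helper that calls your own backend
declare function callMyBackend(text: string): Promise<string>;

export function MyRuntimeProvider({ children }: { children: ReactNode }) {
  // you own the message state; the runtime only mirrors it
  const [messages, setMessages] = useState<MyMessage[]>([]);
  const [isRunning, setIsRunning] = useState(false);

  const runtime = useExternalStoreRuntime({
    messages,
    isRunning,
    // map your message shape into the format the UI components expect
    convertMessage: (m: MyMessage) => ({
      role: m.role,
      content: [{ type: "text" as const, text: m.text }],
    }),
    onNew: async (message: AppendMessage) => {
      const part = message.content[0];
      const text = part?.type === "text" ? part.text : "";
      setMessages((prev) => [...prev, { role: "user", text }]);
      setIsRunning(true);
      try {
        const reply = await callMyBackend(text);
        setMessages((prev) => [...prev, { role: "assistant", text: reply }]);
      } finally {
        setIsRunning(false);
      }
    },
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```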
3. Assistant Cloud \[#3-assistant-cloud]
A hosted service that enhances your assistant experience with comprehensive thread management and message history. Assistant Cloud stores complete message history, automatically persists threads, supports human-in-the-loop workflows, and integrates with common auth providers so users can seamlessly resume conversations at any point. [Cloud Docs](/docs/cloud/overview)
There are three common ways to architect your assistant-ui application: \[#there-are-three-common-ways-to-architect-your-assistant-ui-application]
1. Direct Integration with External Providers \[#1-direct-integration-with-external-providers]
2. Using your own API endpoint \[#2-using-your-own-api-endpoint]
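As a minimal sketch of this approach, assuming the AI SDK runtime and a `/api/chat` route like the one shown in the Installation guide:

```tsx
"use client";

import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime, AssistantChatTransport } from "@assistant-ui/react-ai-sdk";
import { Thread } from "@/components/assistant-ui/thread";

// The browser only talks to your own /api/chat endpoint;
// that endpoint holds the provider API key and calls the LLM.
export default function MyAssistant() {
  const runtime = useChatRuntime({
    transport: new AssistantChatTransport({ api: "/api/chat" }),
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```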
3. With Assistant Cloud \[#3-with-assistant-cloud]
# CLI
URL: /docs/cli
Scaffold projects, add components, and manage updates from the command line.
Use the `assistant-ui` CLI to quickly set up new projects and add components to existing ones.
init \[#init]
Use the `init` command to set up assistant-ui in a new or existing project. It installs dependencies, adds components, and configures your project.
```bash
npx assistant-ui@latest init
```
This will:
* Detect if you have an existing project with a `package.json`
* Use `shadcn add` to install the assistant-ui quick-start component
* Add the default assistant-ui components (thread, composer, etc.) to your project
* Configure TypeScript paths and imports
**When to use:**
* Adding assistant-ui to an **existing** Next.js project
* First-time setup in a project with `package.json`
**Options**
```bash
Usage: assistant-ui init [options]
initialize assistant-ui in a new or existing project
Options:
-c, --cwd the working directory. defaults to the current directory.
-h, --help display help for command
```
create \[#create]
Use the `create` command to scaffold a new Next.js project with assistant-ui pre-configured.
```bash
npx assistant-ui@latest create [project-directory]
```
This command scaffolds a project from assistant-ui starter templates or examples.
**Available Templates**
| Template | Description | Command |
| ------------- | ------------------------------------ | ---------------------------------------- |
| `default` | Default template with Vercel AI SDK | `npx assistant-ui create` |
| `minimal` | Bare-bones starting point | `npx assistant-ui create -t minimal` |
| `cloud` | Cloud-backed persistence starter | `npx assistant-ui create -t cloud` |
| `cloud-clerk` | Cloud-backed starter with Clerk auth | `npx assistant-ui create -t cloud-clerk` |
| `langgraph` | LangGraph starter template | `npx assistant-ui create -t langgraph` |
| `mcp` | MCP starter template | `npx assistant-ui create -t mcp` |
**Available Examples**
Use `--example` to create a project from one of the monorepo examples with full feature demonstrations:
| Example | Description | Command |
| -------------------------- | ----------------------------------------- | ------------------------------------------------------------ |
| `with-ai-sdk-v6` | Vercel AI SDK v6 integration | `npx assistant-ui create my-app -e with-ai-sdk-v6` |
| `with-artifacts` | HTML artifact rendering with live preview | `npx assistant-ui create my-app -e with-artifacts` |
| `with-langgraph` | LangGraph agent with custom tools | `npx assistant-ui create my-app -e with-langgraph` |
| `with-cloud` | Assistant Cloud persistence | `npx assistant-ui create my-app -e with-cloud` |
| `with-ag-ui` | AG-UI protocol integration | `npx assistant-ui create my-app -e with-ag-ui` |
| `with-assistant-transport` | Custom backend via Assistant Transport | `npx assistant-ui create my-app -e with-assistant-transport` |
| `with-chain-of-thought` | Chain-of-thought with JS execution | `npx assistant-ui create my-app -e with-chain-of-thought` |
| `with-external-store` | External message store | `npx assistant-ui create my-app -e with-external-store` |
| `with-custom-thread-list` | Custom thread list UI | `npx assistant-ui create my-app -e with-custom-thread-list` |
| `with-react-hook-form` | React Hook Form integration | `npx assistant-ui create my-app -e with-react-hook-form` |
| `with-ffmpeg` | FFmpeg video processing tool | `npx assistant-ui create my-app -e with-ffmpeg` |
| `with-elevenlabs-scribe` | ElevenLabs voice transcription | `npx assistant-ui create my-app -e with-elevenlabs-scribe` |
| `with-parent-id-grouping` | Message part grouping | `npx assistant-ui create my-app -e with-parent-id-grouping` |
| `with-react-router` | React Router v7 integration | `npx assistant-ui create my-app -e with-react-router` |
| `with-tanstack` | TanStack Start integration | `npx assistant-ui create my-app -e with-tanstack` |
**Examples**
```bash
# Create with default template
npx assistant-ui@latest create my-app
# Create with cloud template
npx assistant-ui@latest create my-app -t cloud
# Create with cloud + clerk template
npx assistant-ui@latest create my-app -t cloud-clerk
# Create from an example
npx assistant-ui@latest create my-app --example with-langgraph
# Create with specific package manager
npx assistant-ui@latest create my-app --use-pnpm
# Skip package installation
npx assistant-ui@latest create my-app --skip-install
```
**Options**
```bash
Usage: assistant-ui create [project-directory] [options]
create a new project
Arguments:
project-directory name of the project directory
Options:
-t, --template template to use (default, minimal, cloud, cloud-clerk, langgraph, mcp)
-e, --example create from an example (e.g., with-langgraph)
--use-npm explicitly use npm
--use-pnpm explicitly use pnpm
--use-yarn explicitly use yarn
--use-bun explicitly use bun
--skip-install skip installing packages
-h, --help display help for command
```
add \[#add]
Use the `add` command to add individual components to your project.
```bash
npx assistant-ui@latest add [component]
```
The `add` command fetches components from the assistant-ui registry and adds them to your project. It automatically:
* Installs required dependencies
* Adds TypeScript types
* Configures imports
**Popular Components**
```bash
# Add the basic thread component
npx assistant-ui add thread
# Add thread list for multi-conversation support
npx assistant-ui add thread-list
# Add assistant modal
npx assistant-ui add assistant-modal
# Add multiple components at once
npx assistant-ui add thread thread-list assistant-sidebar
```
**Options**
```bash
Usage: assistant-ui add [options]
add a component to your project
Arguments:
components the components to add
Options:
-y, --yes skip confirmation prompt. (default: true)
-o, --overwrite overwrite existing files. (default: false)
-c, --cwd the working directory. defaults to the current directory.
-p, --path the path to add the component to.
-h, --help display help for command
```
update \[#update]
Use the `update` command to update all `@assistant-ui/*` packages to their latest versions.
```bash
npx assistant-ui@latest update
```
This command:
* Scans your `package.json` for assistant-ui packages
* Updates them to the latest versions using your package manager
* Preserves other dependencies
**Examples**
```bash
# Update all assistant-ui packages
npx assistant-ui update
# Dry run to see what would be updated
npx assistant-ui update --dry
```
**Options**
```bash
Usage: assistant-ui update [options]
update all '@assistant-ui/*' packages to latest versions
Options:
--dry print the command instead of running it
-c, --cwd the working directory. defaults to the current directory.
-h, --help display help for command
```
upgrade \[#upgrade]
Use the `upgrade` command to automatically migrate your codebase when there are breaking changes.
```bash
npx assistant-ui@latest upgrade
```
This command:
* Runs codemods to transform your code
* Updates import paths and API usage
* Detects required dependency changes
* Prompts to install new packages
**What it does:**
* Applies all available codemods sequentially
* Shows progress bar with file count
* Reports any transformation errors
* Automatically detects and offers to install new dependencies
**Example output:**
```bash
Starting upgrade...
Found 24 files to process.
Progress |████████████████████| 100% | ETA: 0s || Running v0-11/content-part-to-message-part...
Checking for package dependencies...
✅ Upgrade complete!
```
**Options**
```bash
Usage: assistant-ui upgrade [options]
upgrade and apply codemods for breaking changes
Options:
-d, --dry dry run (no changes are made to files)
-p, --print print transformed files to stdout
--verbose show more information about the transform process
-j, --jscodeshift pass options directly to jscodeshift
-h, --help display help for command
```
codemod \[#codemod]
Use the `codemod` command to run a specific codemod transformation.
```bash
npx assistant-ui@latest codemod
```
This is useful when you want to run a specific migration rather than all available upgrades.
**Examples**
```bash
# Run specific codemod on a directory
npx assistant-ui codemod v0-11/content-part-to-message-part ./src
# Run with dry run to preview changes
npx assistant-ui codemod v0-11/content-part-to-message-part ./src --dry
# Print transformed output
npx assistant-ui codemod v0-11/content-part-to-message-part ./src --print
```
**Options**
```bash
Usage: assistant-ui codemod [options]
run a specific codemod transformation
Arguments:
codemod codemod to run
source path to source files or directory to transform
Options:
-d, --dry dry run (no changes are made to files)
-p, --print print transformed files to stdout
--verbose show more information about the transform process
-j, --jscodeshift pass options directly to jscodeshift
-h, --help display help for command
```
mcp \[#mcp]
Use the `mcp` command to install the assistant-ui MCP docs server for your IDE.
```bash
npx assistant-ui@latest mcp
```
This command configures the [Model Context Protocol](/docs/llm#mcp) server, giving your AI assistant direct access to assistant-ui documentation.
**Examples**
```bash
# Interactive - prompts to select IDE
npx assistant-ui mcp
# Install for specific IDE
npx assistant-ui mcp --cursor
npx assistant-ui mcp --windsurf
npx assistant-ui mcp --vscode
npx assistant-ui mcp --zed
npx assistant-ui mcp --claude-code
npx assistant-ui mcp --claude-desktop
```
**Options**
```bash
Usage: assistant-ui mcp [options]
install assistant-ui MCP docs server for your IDE
Options:
--cursor install for Cursor
--windsurf install for Windsurf
--vscode install for VSCode
--zed install for Zed
--claude-code install for Claude Code
--claude-desktop install for Claude Desktop
-h, --help display help for command
```
Common Workflows \[#common-workflows]
Starting a new project \[#starting-a-new-project]
```bash
# Create a new project with the default template
npx assistant-ui@latest create my-chatbot
# Navigate into the directory
cd my-chatbot
# Start development
npm run dev
```
Adding to existing project \[#adding-to-existing-project]
```bash
# Initialize assistant-ui
npx assistant-ui@latest init
# Add additional components
npx assistant-ui@latest add thread-list assistant-modal
# Start development
npm run dev
```
Keeping up to date \[#keeping-up-to-date]
```bash
# Check for updates (dry run)
npx assistant-ui@latest update --dry
# Update all packages
npx assistant-ui@latest update
# Run upgrade codemods if needed
npx assistant-ui@latest upgrade
```
Migrating versions \[#migrating-versions]
```bash
# Run automated migration
npx assistant-ui@latest upgrade
# Or run specific codemod
npx assistant-ui@latest codemod v0-11/content-part-to-message-part ./src
# Update packages after migration
npx assistant-ui@latest update
```
Component Registry \[#component-registry]
The CLI pulls components from our public registry at [r.assistant-ui.com](https://r.assistant-ui.com).
Each component includes:
* Full TypeScript source code
* All required dependencies
* Tailwind CSS configuration
* Usage examples
Components are added directly to your project's source code, giving you full control to customize them.
Troubleshooting \[#troubleshooting]
Command not found \[#command-not-found]
If you get a "command not found" error, make sure you're using `npx`:
```bash
npx assistant-ui@latest init
```
Permission errors \[#permission-errors]
On Linux/macOS, if you encounter permission errors:
```bash
sudo npx assistant-ui@latest init
```
Or fix npm permissions: [https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally](https://docs.npmjs.com/resolving-eacces-permissions-errors-when-installing-packages-globally)
Conflicting dependencies \[#conflicting-dependencies]
If you see dependency conflicts:
```bash
# Try with --force flag
npm install --force
# Or use legacy peer deps
npm install --legacy-peer-deps
```
Component already exists \[#component-already-exists]
Use the `--overwrite` flag to replace existing components:
```bash
npx assistant-ui@latest add thread --overwrite
```
Configuration \[#configuration]
The CLI respects your project's configuration:
* **Package Manager**: Automatically detects npm, pnpm, yarn, or bun
* **TypeScript**: Works with your `tsconfig.json` paths
* **Tailwind**: Uses your `tailwind.config.js` settings
* **Import Aliases**: Respects `components.json` or `assistant-ui.json` configuration
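As an illustration, a minimal shadcn-style `components.json` with import aliases (the paths here are examples; yours may differ):

```json
{
  "$schema": "https://ui.shadcn.com/schema.json",
  "tsx": true,
  "aliases": {
    "components": "@/components",
    "utils": "@/lib/utils"
  }
}
```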
# DevTools
URL: /docs/devtools
Inspect runtime state, context, and events in the browser.
The assistant-ui DevTools let you inspect assistant-ui state, context, and events without resorting to `console.log`. It's an easy way to see how data flows through assistant-ui's runtime layer.
Setup \[#setup]
Install the DevTools package \[#install-the-devtools-package]
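Assuming npm (use your package manager's equivalent otherwise):

```sh
npm install @assistant-ui/react-devtools
```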
Mount the DevTools modal \[#mount-the-devtools-modal]
```tsx
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { DevToolsModal } from "@assistant-ui/react-devtools";

export function AssistantApp() {
  return (
    <>
      {/* ...your assistant-ui, wrapped in AssistantRuntimeProvider... */}
      <DevToolsModal />
    </>
  );
}
```
Verify the DevTools overlay \[#verify-the-devtools-overlay]
That's it! In development builds you should now see the DevTools in the lower-right corner of your site.
# Introduction
URL: /docs
Beautiful, enterprise-grade AI chat interfaces for React applications.
import { Sparkles, PanelsTopLeft, Database, Terminal, Bot } from "lucide-react";
assistant-ui helps you create beautiful, enterprise-grade AI chat interfaces in minutes. Whether you're building a ChatGPT clone, a customer support chatbot, an AI assistant, or a complex multi-agent application, assistant-ui provides the frontend primitive components and state management layers so you can focus on what makes your application unique.
Already using the AI SDK with your own UI? Add [cloud persistence with just one hook](/docs/cloud/ai-sdk), no UI library required.
Key Features \[#key-features]
* **Instant Chat UI**: Pre-built, beautiful, customizable chat interfaces out of the box, making it easy to iterate on your idea quickly.
* **Chat State Management**: Powerful state management for chat interactions, optimized for streaming responses and efficient rendering.
* **High Performance**: Optimized for speed and efficiency with minimal bundle size, ensuring your AI chat interfaces remain responsive.
* **Framework Agnostic**: Easily integrates with any backend system, whether you use the Vercel AI SDK, direct LLM connections, or custom solutions. Works with any React-based framework.
Quick Try \[#quick-try]
The fastest way to get started:
```sh
npx assistant-ui@latest create
```
This creates a new project with everything configured. Or choose a template:
```sh
# Minimal starter
npx assistant-ui@latest create -t minimal
# Assistant Cloud - with persistence and thread management
npx assistant-ui@latest create -t cloud
# Assistant Cloud + Clerk authentication
npx assistant-ui@latest create -t cloud-clerk
# LangGraph starter template
npx assistant-ui@latest create -t langgraph
# MCP starter template
npx assistant-ui@latest create -t mcp
```
What's Next? \[#whats-next]
# Installation
URL: /docs/installation
Get assistant-ui running in 5 minutes with npm and your first chat component.
Quick Start \[#quick-start]
The fastest way to get started with assistant-ui.
Initialize assistant-ui \[#initialize-assistant-ui]
**Create a new project:**
```sh
npx assistant-ui@latest create
```
Or choose a template:
```sh
# Minimal starter
npx assistant-ui@latest create -t minimal
# Assistant Cloud - with persistence and thread management
npx assistant-ui@latest create -t cloud
# Assistant Cloud + Clerk authentication
npx assistant-ui@latest create -t cloud-clerk
# LangGraph starter template
npx assistant-ui@latest create -t langgraph
# MCP starter template
npx assistant-ui@latest create -t mcp
```
**Add to an existing project:**
```sh
npx assistant-ui@latest init
```
Add API key \[#add-api-key]
Create a `.env` file with your API key:
```
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
Start the app \[#start-the-app]
```sh
npm run dev
```
Manual Setup \[#manual-setup]
If you prefer not to use the CLI, you can install components manually.
Add assistant-ui \[#add-assistant-ui]
Setup Backend Endpoint \[#setup-backend-endpoint]
Install provider SDK:
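For example, for the OpenAI route shown first below (assuming npm; swap in the `@ai-sdk/*` package for your provider):

```sh
npm install ai @ai-sdk/openai
```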
Add an API endpoint:
```ts title="/app/api/chat/route.ts"
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai("gpt-4o-mini"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { anthropic } from "@ai-sdk/anthropic";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: anthropic("claude-3-5-sonnet-20240620"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { azure } from "@ai-sdk/azure";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: azure("your-deployment-name"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { bedrock } from "@ai-sdk/amazon-bedrock";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: bedrock("anthropic.claude-3-5-sonnet-20240620-v1:0"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { google } from "@ai-sdk/google";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: google("gemini-2.0-flash"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { vertex } from "@ai-sdk/google-vertex";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: vertex("gemini-1.5-pro"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { createOpenAI } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
const groq = createOpenAI({
apiKey: process.env.GROQ_API_KEY ?? "",
baseURL: "https://api.groq.com/openai/v1",
});
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: groq("llama3-70b-8192"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { createOpenAI } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
const fireworks = createOpenAI({
apiKey: process.env.FIREWORKS_API_KEY ?? "",
baseURL: "https://api.fireworks.ai/inference/v1",
});
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: fireworks("accounts/fireworks/models/firefunction-v2"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { cohere } from "@ai-sdk/cohere";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: cohere("command-r-plus"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { ollama } from "ollama-ai-provider-v2";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: ollama("llama3"),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
```ts title="/app/api/chat/route.ts"
import { chromeai } from "chrome-ai";
import { convertToModelMessages, streamText } from "ai";
export const maxDuration = 30;
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: chromeai(),
messages: convertToModelMessages(messages),
});
return result.toUIMessageStreamResponse();
}
```
Define environment variables:
```sh title="/.env.local"
OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
ANTHROPIC_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
AZURE_RESOURCE_NAME="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AZURE_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
AWS_REGION="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
GOOGLE_GENERATIVE_AI_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
GOOGLE_VERTEX_PROJECT="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
GOOGLE_VERTEX_LOCATION="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
GOOGLE_APPLICATION_CREDENTIALS="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
GROQ_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
FIREWORKS_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
```sh title="/.env.local"
COHERE_API_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```
If you aren't using Next.js, you can also deploy this endpoint to Cloudflare Workers or any other serverless platform.
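For example, the same handler can run as a Cloudflare Worker. A minimal sketch, assuming the Workers module-worker format and an `OPENAI_API_KEY` environment binding:

```ts
import { createOpenAI } from "@ai-sdk/openai";
import { convertToModelMessages, streamText } from "ai";

export default {
  async fetch(request: Request, env: { OPENAI_API_KEY: string }) {
    // create the provider with the key from the Worker environment
    const openai = createOpenAI({ apiKey: env.OPENAI_API_KEY });

    const { messages } = await request.json();
    const result = streamText({
      model: openai("gpt-4o-mini"),
      messages: convertToModelMessages(messages),
    });
    return result.toUIMessageStreamResponse();
  },
};
```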
Use it in your app \[#use-it-in-your-app]
```tsx title="/app/page.tsx"
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime, AssistantChatTransport } from "@assistant-ui/react-ai-sdk";
import { ThreadList } from "@/components/assistant-ui/thread-list";
import { Thread } from "@/components/assistant-ui/thread";
const MyApp = () => {
  const runtime = useChatRuntime({
    transport: new AssistantChatTransport({
      api: "/api/chat",
    }),
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {/* arrange these however your layout requires */}
      <ThreadList />
      <Thread />
    </AssistantRuntimeProvider>
  );
};
```
```tsx title="/app/page.tsx"
// run `npx shadcn@latest add https://r.assistant-ui.com/assistant-modal.json`
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime, AssistantChatTransport } from "@assistant-ui/react-ai-sdk";
import { AssistantModal } from "@/components/assistant-ui/assistant-modal";
const MyApp = () => {
  const runtime = useChatRuntime({
    transport: new AssistantChatTransport({
      api: "/api/chat",
    }),
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <AssistantModal />
    </AssistantRuntimeProvider>
  );
};
```
What's Next? \[#whats-next]
# Agent Skills
URL: /docs/llm
Use AI tools to build with assistant-ui faster. AI-accessible documentation, Claude Code skills, and MCP integration.
import { FileText } from "lucide-react";
Build faster with AI assistants that understand assistant-ui. This page covers all the ways to give your AI tools access to assistant-ui documentation and context.
AI Accessible Documentation \[#ai-accessible-documentation]
Our docs are designed to be easily accessible to AI assistants:
* **[/llms.txt](/llms.txt)**: Structured index of all documentation pages. Point your AI here for a quick overview.
* **[/llms-full.txt](/llms-full.txt)**: Complete documentation in a single file. Use this for full context.
* **`.mdx` suffix**: Add `.mdx` to any page's URL to get raw markdown content (e.g., `/docs/installation.mdx`).
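For example, assuming `curl` is available:

```sh
# fetch the installation page as raw markdown
curl https://www.assistant-ui.com/docs/installation.mdx
```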
Context Files \[#context-files]
Add assistant-ui context to your project's `CLAUDE.md` or `.cursorrules`:
```md
## assistant-ui
This project uses assistant-ui for chat interfaces.
Documentation: https://www.assistant-ui.com/llms-full.txt
Key patterns:
- Use AssistantRuntimeProvider at the app root
- Thread component for full chat interface
- AssistantModal for floating chat widget
- useChatRuntime hook with AI SDK transport
```
Skills \[#skills]
Install assistant-ui skills for AI Tools:
```sh
npx skills add assistant-ui/skills
```
| Skill | Purpose |
| --------------- | -------------------------------------------------------------------- |
| `/assistant-ui` | General architecture and overview guide |
| `/setup` | Project setup and configuration (AI SDK, LangGraph, custom backends) |
| `/primitives` | UI component primitives (Thread, Composer, Message, etc.) |
| `/runtime` | Runtime system and state management |
| `/tools` | Tool registration and tool UI |
| `/streaming` | Streaming protocol with assistant-stream |
| `/cloud` | Cloud persistence and authorization |
| `/thread-list` | Multi-thread management |
| `/update` | Update assistant-ui and AI SDK to latest versions |
Use a skill by typing its command in Claude Code, e.g., `/assistant-ui` for the main guide or `/setup` when setting up a project.
MCP \[#mcp]
`@assistant-ui/mcp-docs-server` provides direct access to assistant-ui documentation and examples in your IDE via the Model Context Protocol.
Once installed, your AI assistant will understand everything about assistant-ui; just ask naturally:
* "Add a chat interface with streaming support to my app"
* "How do I integrate assistant-ui with the Vercel AI SDK?"
* "My Thread component isn't updating, what could be wrong?"
Quick Install (CLI) \[#quick-install-cli]
```bash
npx add-mcp @assistant-ui/mcp-docs-server
```
Or specify your IDE directly:
```bash
npx add-mcp @assistant-ui/mcp-docs-server -a claude-code
npx add-mcp @assistant-ui/mcp-docs-server -a claude-desktop
npx add-mcp @assistant-ui/mcp-docs-server -a codex
npx add-mcp @assistant-ui/mcp-docs-server -a cursor
npx add-mcp @assistant-ui/mcp-docs-server -a gemini-cli
npx add-mcp @assistant-ui/mcp-docs-server -a opencode
npx add-mcp @assistant-ui/mcp-docs-server -a vscode
npx add-mcp @assistant-ui/mcp-docs-server -a zed
```
Manual Installation \[#manual-installation]
Or add to `.cursor/mcp.json`:
```json
{
"mcpServers": {
"assistant-ui": {
"command": "npx",
"args": ["-y", "@assistant-ui/mcp-docs-server"]
}
}
}
```
After adding, open Cursor Settings → MCP → find "assistant-ui" and click enable.
Add to `~/.codeium/windsurf/mcp_config.json`:
```json
{
"mcpServers": {
"assistant-ui": {
"command": "npx",
"args": ["-y", "@assistant-ui/mcp-docs-server"]
}
}
}
```
After adding, fully quit and re-open Windsurf.
Add to `.vscode/mcp.json` in your project:
```json
{
"servers": {
"assistant-ui": {
"command": "npx",
"args": ["-y", "@assistant-ui/mcp-docs-server"],
"type": "stdio"
}
}
}
```
Enable MCP in Settings → search "MCP" → enable "Chat > MCP". Use GitHub Copilot Chat in Agent mode.
Add to your Zed settings file:
* macOS: `~/.zed/settings.json`
* Linux: `~/.config/zed/settings.json`
* Windows: `%APPDATA%\Zed\settings.json`
Or open via `Cmd/Ctrl + ,` → "Open JSON Settings"
```json
{
"context_servers": {
"assistant-ui": {
"command": {
"path": "npx",
"args": ["-y", "@assistant-ui/mcp-docs-server"]
}
}
}
}
```
The server starts automatically with the Assistant Panel.
```bash
claude mcp add assistant-ui -- npx -y @assistant-ui/mcp-docs-server
```
The server starts automatically once added.
Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):
```json
{
"mcpServers": {
"assistant-ui": {
"command": "npx",
"args": ["-y", "@assistant-ui/mcp-docs-server"]
}
}
}
```
Restart Claude Desktop after updating the configuration.
Available Tools \[#available-tools]
| Tool | Description |
| --------------------- | --------------------------------------------------------------------------------------- |
| `assistantUIDocs` | Access documentation: getting started, component APIs, runtime docs, integration guides |
| `assistantUIExamples` | Browse code examples: AI SDK, LangGraph, OpenAI Assistants, tool UI patterns |
Troubleshooting \[#troubleshooting]
* **Server not starting**: Ensure `npx` is installed and working. Check configuration file syntax.
* **Tool calls failing**: Restart the MCP server and/or your IDE. Update to latest IDE version.
# Using old React versions
URL: /docs/react-compatibility
Compatibility notes for React 18, 17, and 16.
Older React versions are not continuously tested. If you encounter any issues
with integration, please contact us for help by joining our
[Discord](https://discord.gg/S9dwgCNEFs).
This guide provides instructions for configuring assistant-ui to work with React 18 or older versions.
React 18 \[#react-18]
If you're using React 18, you need to update the shadcn/ui components to work with `forwardRef`. Specifically, you need to modify the Button component.
Updating the Button Component \[#updating-the-button-component]
Navigate to your button component file (typically `/components/ui/button.tsx`) and wrap the Button component with `forwardRef`:
```tsx
// Before
function Button({
  className,
  variant,
  size,
  asChild = false,
  ...props
}: React.ComponentProps<"button"> &
  VariantProps<typeof buttonVariants> & {
    asChild?: boolean;
  }) {
  const Comp = asChild ? Slot : "button";
  return (
    <Comp
      className={cn(buttonVariants({ variant, size, className }))}
      {...props}
    />
  );
}

// After
const Button = React.forwardRef<
  HTMLButtonElement,
  React.ComponentProps<"button"> &
    VariantProps<typeof buttonVariants> & {
      asChild?: boolean;
    }
>(({ className, variant, size, asChild = false, ...props }, ref) => {
  const Comp = asChild ? Slot : "button";
  return (
    <Comp
      ref={ref}
      className={cn(buttonVariants({ variant, size, className }))}
      {...props}
    />
  );
});
Button.displayName = "Button";
```
**Note:** If you're using a lower React version (17 or 16), you'll also need to follow the instructions for that version.
React 17 \[#react-17]
For React 17 compatibility, in addition to the modifications outlined for React 18, you must also include a polyfill for the `useSyncExternalStore` hook (utilized by zustand).
Patching Zustand with patch-package \[#patching-zustand-with-patch-package]
Since assistant-ui uses zustand internally, which depends on `useSyncExternalStore`, you'll need to patch the zustand package directly:
1. Install the required packages: `patch-package` (as a dev dependency) and the `use-sync-external-store` polyfill.
2. Add a postinstall script to your package.json:
```json
{
"scripts": {
"postinstall": "patch-package"
}
}
```
3. Follow the instructions in [patch-package](https://github.com/ds300/patch-package): first make your changes to the files of a package inside `node_modules`, then run `yarn patch-package package-name` or `npx patch-package package-name`. You'll need a patch for zustand: within `node_modules/zustand`, open `react.js` and make the following code changes:
```diff
diff --git a/node_modules/zustand/react.js b/node_modules/zustand/react.js
index 7599cfb..64530a8 100644
--- a/node_modules/zustand/react.js
+++ b/node_modules/zustand/react.js
@@ -1,6 +1,6 @@
 'use strict';
-var React = require('react');
+var React = require('use-sync-external-store/shim');
 var vanilla = require('zustand/vanilla');
 const identity = (arg) => arg;
@@ -10,7 +10,7 @@ function useStore(api, selector = identity) {
 () => selector(api.getState()),
 () => selector(api.getInitialState())
 );
- React.useDebugValue(slice);
+ //React.useDebugValue(slice);
 return slice;
 }
 const createImpl = (createState) => {
```
This patch replaces the React import in zustand with the polyfill from `use-sync-external-store/shim` and comments out the `useDebugValue` call, which isn't needed.
Then run `yarn patch-package zustand` or `npx patch-package zustand`, which creates a `patches` folder containing a zustand patch file similar to this:
```diff
diff --git a/node_modules/zustand/react.js b/node_modules/zustand/react.js
index 7599cfb..64530a8 100644
--- a/node_modules/zustand/react.js
+++ b/node_modules/zustand/react.js
@@ -1,6 +1,6 @@
 'use strict';
-var React = require('react');
+var React = require('use-sync-external-store/shim');
 var vanilla = require('zustand/vanilla');
 const identity = (arg) => arg;
@@ -10,7 +10,7 @@ function useStore(api, selector = identity) {
 () => selector(api.getState()),
 () => selector(api.getInitialState())
 );
- React.useDebugValue(slice);
+ //React.useDebugValue(slice);
 return slice;
 }
 const createImpl = (createState) => {
```
4. You may also need to apply the same patch within `node_modules/@assistant-ui/react/`, and possibly to the nested copy at `node_modules/@assistant-ui/react/node_modules/zustand`. Look for instances of `React.useSyncExternalStore`, replace them with `useSyncExternalStore` imported from `use-sync-external-store/shim`, and comment out any `useDebugValue` calls. Finally, you may need to patch React's `useId`: within `node_modules/@assistant-ui/react/dist/runtimes/remote-thread-list/RemoteThreadListThreadListRuntimeCore.js`, change the following:
```diff
-import { Fragment, useEffect, useId } from "react";
+import { Fragment, useEffect, useRef } from "react";
import { create } from "zustand";
import { AssistantMessageStream } from "assistant-stream";
import { RuntimeAdapterProvider } from "../adapters/RuntimeAdapterProvider.js";
import { jsx } from "react/jsx-runtime";
+
+// PATCH-PACKAGE: Polyfill for useId if not available in React 16
+let useId;
+try {
+ // Try to use React's useId if available
+ useId = require("react").useId;
+} catch (e) {}
+if (!useId) {
+ // Fallback polyfill
+ let globalId = 0;
+ useId = function() {
+ const idRef = useRef();
+ if (!idRef.current) {
+ globalId++;
+ idRef.current = `uid-${globalId}`;
+ }
+ return idRef.current;
+ };
+}
```
5. Run the postinstall script to apply the patches:
```bash
npm run postinstall
# or
yarn postinstall
```
**Note:** If you're using React 16, you'll also need to follow the instructions for that version.
React 16 \[#react-16]
This section is incomplete and contributions are welcome. If you're using
React 16 and have successfully integrated assistant-ui, please consider
contributing to this documentation.
For React 16 compatibility, you need to apply all the steps for **React 18** and **React 17** above.
Additional Resources \[#additional-resources]
If you encounter any issues with React compatibility, please:
1. Check that all required dependencies are installed
2. Ensure your component modifications are correctly implemented
3. Join our [Discord](https://discord.gg/S9dwgCNEFs) community for direct support
# AI SDK + assistant-ui
URL: /docs/cloud/ai-sdk-assistant-ui
Integrate cloud persistence using assistant-ui runtime and pre-built components.
Overview \[#overview]
This guide shows how to integrate Assistant Cloud with the [AI SDK](https://sdk.vercel.ai/) using assistant-ui's runtime system and pre-built UI components.
What You Get \[#what-you-get]
This integration provides:
* **`<Thread />`** — A complete chat interface with messages, composer, and status indicators
* **`<ThreadList />`** — A sidebar showing all conversations with auto-generated titles, plus new/delete/manage actions
* **Automatic Persistence** — Messages save as they stream. Threads are created automatically on first message.
* **Runtime Integration** — The assistant-ui runtime handles all cloud synchronization behind the scenes.
How It Works \[#how-it-works]
The `useChatRuntime` hook from `@assistant-ui/react-ai-sdk` wraps AI SDK's `useChat` and adds cloud persistence via the `cloud` parameter. The runtime automatically:
1. Creates a cloud thread on the first user message
2. Persists messages as they complete streaming
3. Generates a conversation title after the assistant's first response
4. Loads historical messages when switching threads via `<ThreadList />`
You provide the AI SDK endpoint (`api: "/api/chat"`) and the cloud configuration—everything else is handled.
Prerequisites \[#prerequisites]
You need an assistant-cloud account to follow this guide. [Sign up here](https://cloud.assistant-ui.com/) to get started.
Setup Guide \[#setup-guide]
Create a Cloud Project \[#create-a-cloud-project]
Create a new project in the [assistant-cloud dashboard](https://cloud.assistant-ui.com/) and from the settings page, copy:
* **Frontend API URL**: `https://proj-[ID].assistant-api.com`
* **Assistant Cloud API Key**: `sk_aui_proj_*`
Configure Environment Variables \[#configure-environment-variables]
Add the following environment variables to your project:
```bash title=".env.local"
# Frontend API URL from your cloud project settings
NEXT_PUBLIC_ASSISTANT_BASE_URL=https://proj-[YOUR-ID].assistant-api.com
# API key for server-side operations
ASSISTANT_API_KEY=your-api-key-here
```
Install Dependencies \[#install-dependencies]
Install the required packages:
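The exact command depends on your package manager; as a sketch, with package names inferred from the imports used in this guide:

```shell
# Package names inferred from this guide's imports (assumption, not an
# official install command); verify against the assistant-ui docs.
npm install @assistant-ui/react @assistant-ui/react-ai-sdk ai
```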
Set Up the Cloud Runtime \[#set-up-the-cloud-runtime]
Create a client-side AssistantCloud instance and integrate it with your AI SDK runtime:
```tsx title="app/chat/page.tsx"
"use client";
import { AssistantCloud, AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { ThreadList } from "@/components/assistant-ui/thread-list";
import { Thread } from "@/components/assistant-ui/thread";
export default function ChatPage() {
  const cloud = new AssistantCloud({
    baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
    anonymous: true, // Creates browser-session based user ID
  });
  const runtime = useChatRuntime({
    cloud,
  });
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {/* Layout classes are illustrative; arrange the components to fit your app */}
      <div className="grid h-dvh grid-cols-[250px_1fr] gap-4 p-4">
        <ThreadList />
        <Thread />
      </div>
    </AssistantRuntimeProvider>
  );
}
```
Telemetry \[#telemetry]
The `useChatRuntime` hook captures full run telemetry including timing data. This integrates with the assistant-ui runtime to provide:
**Automatically captured:**
* `status` — `"completed"`, `"incomplete"`, or `"error"`
* `duration_ms` — Total run duration (measured client-side)
* `steps` — Per-step breakdowns with timing, usage, and tool calls
* `tool_calls` — Tool invocations with name, arguments, results, and source
* `total_steps` — Number of reasoning/tool steps
* `output_text` — Full response text (truncated at 50K characters)
**Requires route configuration:**
* `model_id` — The model used
* `input_tokens` / `output_tokens` — Token usage statistics
To capture model and usage data, add the `messageMetadata` callback to your AI SDK route:
```tsx title="app/api/chat/route.ts"
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText, type UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-5-mini"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse({
    messageMetadata: ({ part }) => {
      if (part.type === "finish") {
        return {
          usage: part.totalUsage,
        };
      }
      if (part.type === "finish-step") {
        return {
          modelId: part.response.modelId,
        };
      }
      return undefined;
    },
  });
}
```
Without this configuration, model and token data will be omitted from telemetry reports.
Customizing Reports \[#customizing-reports]
Use the `beforeReport` hook to add custom metadata or filter reports:
```tsx
const cloud = new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
telemetry: {
beforeReport: (report) => ({
...report,
metadata: { userTier: "pro", region: "us-east" },
}),
},
});
```
Return `null` from `beforeReport` to skip reporting a specific run. To disable telemetry entirely, pass `telemetry: false`.
Authentication \[#authentication]
The example above uses anonymous mode (browser session-based user ID) via the env var. For production apps with user accounts, pass an explicit cloud instance:
```tsx
import { useMemo } from "react";
import { useAuth } from "@clerk/nextjs";
import { AssistantCloud } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";

function Chat() {
  const { getToken } = useAuth();
  const cloud = useMemo(
    () =>
      new AssistantCloud({
        baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
        authToken: async () => getToken({ template: "assistant-ui" }),
      }),
    [getToken],
  );
  const runtime = useChatRuntime({ cloud });
  // ...
}
```
See the [Cloud Authorization](/docs/cloud/authorization) guide for other auth providers.
Complete Example \[#complete-example]
Check out the [with-cloud example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud) on GitHub for a fully working implementation.
# AI SDK
URL: /docs/cloud/ai-sdk
Add cloud persistence to your existing AI SDK app with a single hook.
import { InstallCommand } from "@/components/docs/fumadocs/install/install-command";
Overview \[#overview]
The `@assistant-ui/cloud-ai-sdk` package provides a single hook that adds full message and thread persistence to any [AI SDK](https://sdk.vercel.ai/) application:
* **`useCloudChat`** — wraps `useChat` with automatic cloud persistence and built-in thread management
This hook works with any React UI. You keep full control of your components.
See [AI SDK + assistant-ui](/docs/cloud/ai-sdk-assistant-ui) for the full integration with assistant-ui's primitives and runtime.
Prerequisites \[#prerequisites]
You need an assistant-cloud account to follow this guide. [Sign up here](https://cloud.assistant-ui.com/) to get started.
Setup \[#setup]
Create a Cloud Project \[#create-a-cloud-project]
Create a new project in the [assistant-cloud dashboard](https://cloud.assistant-ui.com/) and from the settings page, copy your **Frontend API URL** (`https://proj-[ID].assistant-api.com`).
Configure Environment Variables \[#configure-environment-variables]
```bash title=".env.local"
NEXT_PUBLIC_ASSISTANT_BASE_URL=https://proj-[YOUR-ID].assistant-api.com
```
Install Dependencies \[#install-dependencies]
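The exact command depends on your package manager; as a sketch, with package names inferred from the imports used in this guide:

```shell
# Package names inferred from this guide's imports (assumption, not an
# official install command); verify against the assistant-ui docs.
npm install @assistant-ui/cloud-ai-sdk ai
```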
Integrate \[#integrate]
```tsx title="app/page.tsx"
"use client";
import { useState } from "react";
import { useCloudChat } from "@assistant-ui/cloud-ai-sdk";
export default function Chat() {
  // Zero-config: auto-initializes anonymous cloud from env var with built-in threads.
  // For custom config, pass: { cloud, threads: useThreads(...), onSyncError }
  const { messages, sendMessage, threads } = useCloudChat();
  const [input, setInput] = useState("");

  const handleSubmit = () => {
    if (!input.trim()) return;
    sendMessage({ text: input });
    setInput("");
  };

  // Markup below is a minimal unstyled sketch
  return (
    <div>
      {/* Thread list */}
      {threads.threads.map((t) => (
        <button key={t.id} onClick={() => threads.selectThread(t.id)}>
          {t.title || "New conversation"}
        </button>
      ))}
      <button onClick={() => threads.selectThread(null)}>New chat</button>

      {/* Chat messages */}
      {messages.map((m) => (
        <div key={m.id}>{m.parts.map((p) => p.type === "text" && p.text)}</div>
      ))}

      {/* Composer */}
      <input
        value={input}
        onChange={(e) => setInput(e.target.value)}
        onKeyDown={(e) => e.key === "Enter" && handleSubmit()}
      />
    </div>
  );
}
```
That's it. Messages persist automatically as they complete, and switching threads loads the full history.
API Reference \[#api-reference]
useCloudChat(options?) \[#usecloudchatoptions]
Wraps AI SDK's `useChat` with automatic cloud persistence and built-in thread management. Messages are persisted as they finish streaming. Thread creation is automatic on the first message — the hook will auto-create the thread, select it, refresh the thread list, and generate a title after the first response.
Configuration Modes \[#configuration-modes]
**1. Zero-config** — Set `NEXT_PUBLIC_ASSISTANT_BASE_URL` env var, call with no args:
```tsx
const chat = useCloudChat();
```
**2. Custom cloud instance** — For authenticated users or custom configuration:
```tsx
const cloud = new AssistantCloud({ baseUrl, authToken });
const chat = useCloudChat({ cloud });
```
**3. External thread management** — When threads need to be accessed from a separate component or you need custom thread options like `includeArchived`:
```tsx
// In a context provider or parent component
const myThreads = useThreads({ cloud, includeArchived: true });
// Pass to useCloudChat - it will use your thread state
const chat = useCloudChat({ threads: myThreads });
```
Parameters \[#parameters]
| Parameter | Type | Description |
| --------------------- | ------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `options.cloud` | `AssistantCloud` | Cloud instance (optional — auto-creates anonymous instance from `NEXT_PUBLIC_ASSISTANT_BASE_URL` env var if not provided) |
| `options.threads` | `UseThreadsResult` | External thread management from `useThreads()`. Use when you need thread operations in a separate component or custom thread options like `includeArchived` |
| `options.onSyncError` | `(error: Error) => void` | Callback invoked when a sync error occurs |
All other [AI SDK `useChat` options](https://sdk.vercel.ai/docs/reference/ai-sdk-ui/use-chat) are also accepted.
**Returns:** `UseCloudChatResult`
| Value | Type | Description |
| ------------- | -------------------------------------- | ------------------------------------------------------------------ |
| `messages` | `UIMessage[]` | Chat messages (from AI SDK) |
| `status` | `string` | Chat status: `"ready"`, `"submitted"`, `"streaming"`, or `"error"` |
| `sendMessage` | `(message, options?) => Promise<void>` | Send a message (auto-creates thread if needed) |
| `stop` | `() => void` | Stop the current stream |
| `threads` | `UseThreadsResult` | Thread management (see below) |
Plus all other properties from AI SDK's [`UseChatHelpers`](https://sdk.vercel.ai/docs/reference/ai-sdk-ui/use-chat).
**Thread management (`threads`):**
| Value | Type | Description |
| ----------------------- | ------------------------------------------------- | ------------------------------------------------- |
| `threads.threads` | `CloudThread[]` | Active threads sorted by recency |
| `threads.threadId` | `string \| null` | Current thread ID (`null` for a new unsaved chat) |
| `threads.selectThread` | `(id: string \| null) => void` | Switch threads or pass `null` for a new chat |
| `threads.isLoading` | `boolean` | `true` during initial load or refresh |
| `threads.error` | `Error \| null` | Last error, if any |
| `threads.refresh` | `() => Promise<void>` | Re-fetch the thread list |
| `threads.delete` | `(id: string) => Promise<void>` | Delete a thread |
| `threads.rename` | `(id: string, title: string) => Promise<void>` | Rename a thread |
| `threads.archive` | `(id: string) => Promise<void>` | Archive a thread |
| `threads.unarchive` | `(id: string) => Promise<void>` | Unarchive a thread |
| `threads.generateTitle` | `(threadId: string) => Promise<void>` | Generate a title using AI |
useThreads(options) \[#usethreadsoptions]
Thread list management for use with `useCloudChat`. Call this explicitly and pass to `useCloudChat({ threads })` when you need access to thread operations outside the chat context (e.g., in a separate sidebar component).
```tsx
const myThreads = useThreads({ cloud: myCloud });
const { messages, sendMessage } = useCloudChat({ threads: myThreads });
```
**Parameters:**
| Parameter | Type | Description |
| ------------------------- | ---------------- | ------------------------------------------- |
| `options.cloud` | `AssistantCloud` | Cloud client instance |
| `options.includeArchived` | `boolean` | Include archived threads (default: `false`) |
| `options.enabled` | `boolean` | Enable thread fetching (default: `true`) |
**Returns:** `UseThreadsResult` — same shape as `threads` from `useCloudChat()`.
Telemetry \[#telemetry]
The `useCloudChat` hook automatically reports run telemetry to Assistant Cloud after each assistant response. This includes:
**Automatically captured:**
* `status` — `"completed"` or `"incomplete"` based on response content
* `tool_calls` — Tool invocations with name, arguments, results, and source (MCP, frontend, or backend)
* `total_steps` — Number of reasoning/tool steps in the response
* `output_text` — Full response text (truncated at 50K characters)
**Requires route configuration:**
* `model_id` — The model used for the response
* `input_tokens` / `output_tokens` — Token usage statistics
To capture model and usage data, configure the `messageMetadata` callback in your AI SDK route:
```tsx title="app/api/chat/route.ts"
import { openai } from "@ai-sdk/openai";
import { convertToModelMessages, streamText, type UIMessage } from "ai";

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-5-mini"),
    messages: convertToModelMessages(messages),
  });

  return result.toUIMessageStreamResponse({
    messageMetadata: ({ part }) => {
      if (part.type === "finish") {
        return {
          usage: part.totalUsage,
        };
      }
      if (part.type === "finish-step") {
        return {
          modelId: part.response.modelId,
        };
      }
      return undefined;
    },
  });
}
```
The standalone hook does not capture `duration_ms`, per-step breakdowns (`steps`), custom `metadata` pass-through, or `"error"` status. These require the full runtime integration available via [`useChatRuntime`](/docs/cloud/ai-sdk-assistant-ui).
Customizing Reports \[#customizing-reports]
Use the `beforeReport` hook to enrich or filter telemetry:
```tsx
const cloud = new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
telemetry: {
beforeReport: (report) => ({
...report,
metadata: { environment: "production", version: "1.0.0" },
}),
},
});
```
Return `null` from `beforeReport` to skip reporting a specific run. To disable telemetry entirely, pass `telemetry: false`.
Authentication \[#authentication]
The example above uses anonymous mode (browser session-based user ID) via the env var. For production apps with user accounts, pass an explicit cloud instance:
```tsx
import { useMemo } from "react";
import { useAuth } from "@clerk/nextjs";
import { AssistantCloud } from "assistant-cloud";
import { useCloudChat } from "@assistant-ui/cloud-ai-sdk";

function Chat() {
  const { getToken } = useAuth();
  const cloud = useMemo(
    () =>
      new AssistantCloud({
        baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
        authToken: async () => getToken({ template: "assistant-ui" }),
      }),
    [getToken],
  );
  const { messages, sendMessage, threads } = useCloudChat({ cloud });
  // ...
}
```
See the [Cloud Authorization](/docs/cloud/authorization) guide for other auth providers.
Next Steps \[#next-steps]
* If you want pre-built UI components, see [AI SDK + assistant-ui](/docs/cloud/ai-sdk-assistant-ui) for the full integration
* Learn about [user authentication](/docs/cloud/authorization) for multi-user applications
* Check out the [complete example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud-standalone) on GitHub
# User Authorization
URL: /docs/cloud/authorization
Configure workspace auth tokens and integrate with auth providers.
The Assistant Cloud API is accessed directly from your frontend. This eliminates the need for a backend server for most operations—except for authorizing your users.
This guide explains how to set up user authentication and authorization for Assistant Cloud.
Workspaces \[#workspaces]
Authorization is granted to a **workspace**. A workspace is a scope that contains threads and messages. Most commonly:
* Use a `userId` as the workspace for personal chats
* Use `orgId + userId` for organization-scoped conversations
* Use `projectId + userId` for project-based apps
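The scoping schemes above can be sketched as a small helper. The name `workspaceIdFor` is illustrative; the `${orgId}_${userId}` composition matches the backend token-endpoint example later in this guide:

```typescript
// Illustrative helper: derive the workspace ID that scopes a user's threads.
// Falls back to the bare userId for personal chats.
function workspaceIdFor(userId: string, orgId?: string): string {
  return orgId ? `${orgId}_${userId}` : userId;
}

console.log(workspaceIdFor("user_1")); // "user_1"
console.log(workspaceIdFor("user_1", "org_9")); // "org_9_user_1"
```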
Authentication Approaches \[#authentication-approaches]
Choose the approach that fits your app:
| Approach | Best For | Complexity |
| ------------------------------------ | ------------------------------------------------------------ | ---------- |
| **Direct auth provider integration** | Supported providers (Clerk, Auth0, Supabase, etc.) | Low |
| **Backend server** | Custom auth, multi-user workspaces, or self-hosted solutions | Medium |
| **Anonymous mode** | Demos, prototypes, or testing | None |
Direct Integration with Auth Provider \[#direct-integration-with-auth-provider]
In the Assistant Cloud dashboard, go to **Auth Integrations** and add your provider. This sets up automatic workspace assignment based on the user's ID from your auth provider.
Then pass an `authToken` function that returns your provider's ID token:
```ts
import { AssistantCloud } from "@assistant-ui/react";
const cloud = new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: () => getTokenFromYourProvider(), // Returns JWT
});
```
Backend Server Approach \[#backend-server-approach]
Use this when you need custom workspace logic or unsupported auth providers.
Create an API Key \[#create-an-api-key]
In the Assistant Cloud dashboard, go to **API Keys** and create a key. Add it to your environment:
```bash
ASSISTANT_API_KEY=your_key_here
```
Create the Token Endpoint \[#create-the-token-endpoint]
```ts title="/app/api/assistant-ui-token/route.ts"
import { AssistantCloud } from "assistant-cloud";
import { auth } from "@clerk/nextjs/server"; // Or your auth provider
export const POST = async (req: Request) => {
const { userId, orgId } = await auth();
if (!userId) return new Response("Unauthorized", { status: 401 });
// Define your workspace ID based on your app's structure
const workspaceId = orgId ? `${orgId}_${userId}` : userId;
const assistantCloud = new AssistantCloud({
apiKey: process.env.ASSISTANT_API_KEY!,
userId,
workspaceId,
});
const { token } = await assistantCloud.auth.tokens.create();
return new Response(token);
};
```
Use the Token on the Frontend \[#use-the-token-on-the-frontend]
```tsx title="app/chat/page.tsx"
const cloud = new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: () =>
fetch("/api/assistant-ui-token", { method: "POST" }).then((r) => r.text()),
});
const runtime = useChatRuntime({
cloud,
});
```
Anonymous Mode (No Auth) \[#anonymous-mode-no-auth]
For demos or testing, use anonymous mode to create browser-session-based users:
```tsx
import { AssistantCloud } from "@assistant-ui/react";
const cloud = new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
anonymous: true,
});
```
Anonymous mode creates a new user for each browser session. Threads won't persist across sessions or devices. Use this only for prototyping.
Auth Provider Examples \[#auth-provider-examples]
Clerk \[#clerk]
Configure the JWT Template \[#configure-the-jwt-template]
In the Clerk dashboard, go to **Configure → JWT Templates**. Create a new blank template named "assistant-ui":
```json
{
"aud": "assistant-ui"
}
```
The `aud` claim ensures the JWT is only valid for Assistant Cloud.
Note the **Issuer** and **JWKS Endpoint** values.
Add Auth Integration in Assistant Cloud \[#add-auth-integration-in-assistant-cloud]
In the Assistant Cloud dashboard, go to **Auth Rules** and create a new rule:
* **Provider**: Clerk
* **Issuer**: Paste from Clerk JWT Template
* **JWKS Endpoint**: Paste from Clerk JWT Template
* **Audience**: `assistant-ui`
Use in Your App \[#use-in-your-app]
```tsx
import { useAuth } from "@clerk/nextjs";
import { AssistantCloud } from "@assistant-ui/react";
function Chat() {
const { getToken } = useAuth();
const cloud = useMemo(
() =>
new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: () => getToken({ template: "assistant-ui" }),
}),
[getToken],
);
// Use with your runtime...
}
```
Auth0 \[#auth0]
```tsx
import { useAuth0 } from "@auth0/auth0-react";
import { AssistantCloud } from "@assistant-ui/react";
function Chat() {
const { getAccessTokenSilently } = useAuth0();
const cloud = useMemo(
() =>
new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: () => getAccessTokenSilently(),
}),
[getAccessTokenSilently],
);
// Use with your runtime...
}
```
Configure the Auth0 integration in the Assistant Cloud dashboard with your Auth0 domain and audience.
Supabase Auth \[#supabase-auth]
```tsx
import { useSupabaseClient } from "@supabase/auth-helpers-react";
import { AssistantCloud } from "@assistant-ui/react";
function Chat() {
const supabase = useSupabaseClient();
const cloud = useMemo(
() =>
new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: async () => {
const { data } = await supabase.auth.getSession();
return data.session?.access_token ?? "";
},
}),
[supabase],
);
// Use with your runtime...
}
```
Firebase Auth \[#firebase-auth]
```tsx
import { getAuth, getIdToken } from "firebase/auth";
import { AssistantCloud } from "@assistant-ui/react";
function Chat() {
const cloud = useMemo(() => {
const auth = getAuth();
return new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: () => getIdToken(auth.currentUser!, true),
});
}, []);
// Use with your runtime...
}
```
# LangGraph + assistant-ui
URL: /docs/cloud/langgraph
Integrate cloud persistence and thread management with LangGraph Cloud.
Overview \[#overview]
This guide shows how to integrate Assistant Cloud with [LangGraph Cloud](https://langchain-ai.github.io/langgraph/cloud/) using assistant-ui's runtime system and pre-built UI components.
Prerequisites \[#prerequisites]
You need an assistant-cloud account to follow this guide. [Sign up
here](https://cloud.assistant-ui.com/) to get started.
Setup Guide \[#setup-guide]
Create a Cloud Project \[#create-a-cloud-project]
Create a new project in the [assistant-cloud dashboard](https://cloud.assistant-ui.com/) and from the settings page, copy:
* **Frontend API URL**: `https://proj-[ID].assistant-api.com`
* **API Key**: For server-side operations
Configure Environment Variables \[#configure-environment-variables]
Add the following environment variables to your project:
```bash title=".env.local"
# Frontend API URL from your cloud project settings
NEXT_PUBLIC_ASSISTANT_BASE_URL=https://proj-[YOUR-ID].assistant-api.com
# API key for server-side operations
ASSISTANT_API_KEY=your-api-key-here
```
Install Dependencies \[#install-dependencies]
Install the required packages:
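The exact command depends on your package manager; as a sketch, with package names inferred from the imports used in this guide (the LangGraph SDK client is assumed for the `@/lib/chatApi` helpers):

```shell
# Package names inferred from this guide's imports (assumption, not an
# official install command); verify against the assistant-ui docs.
npm install @assistant-ui/react @assistant-ui/react-langgraph @langchain/langgraph-sdk
```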
Create the Runtime Provider \[#create-the-runtime-provider]
Create a runtime provider that integrates LangGraph with assistant-cloud. Choose between anonymous mode for demos/prototypes or authenticated mode for production:
```tsx title="app/chat/runtime-provider.tsx"
"use client";
import {
AssistantCloud,
AssistantRuntimeProvider,
} from "@assistant-ui/react";
import { useLangGraphRuntime } from "@assistant-ui/react-langgraph";
import { createThread, getThreadState, sendMessage } from "@/lib/chatApi";
import { LangChainMessage } from "@assistant-ui/react-langgraph";
import { useMemo } from "react";
export function MyRuntimeProvider({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
const cloud = useMemo(
() =>
new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
anonymous: true, // Creates browser session-based user ID
}),
[],
);
const runtime = useLangGraphRuntime({
cloud,
stream: async function* (messages, { initialize }) {
const { externalId } = await initialize();
if (!externalId) throw new Error("Thread not found");
return sendMessage({
threadId: externalId,
messages,
});
},
create: async () => {
const { thread_id } = await createThread();
return { externalId: thread_id };
},
load: async (externalId) => {
const state = await getThreadState(externalId);
return {
messages:
(state.values as { messages?: LangChainMessage[] }).messages ?? [],
};
},
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
```tsx title="app/chat/runtime-provider.tsx"
"use client";
import {
AssistantCloud,
AssistantRuntimeProvider,
} from "@assistant-ui/react";
import { useLangGraphRuntime } from "@assistant-ui/react-langgraph";
import { createThread, getThreadState, sendMessage } from "@/lib/chatApi";
import { LangChainMessage } from "@assistant-ui/react-langgraph";
import { useAuth } from "@clerk/nextjs";
import { useMemo } from "react";
export function MyRuntimeProvider({
children,
}: Readonly<{
children: React.ReactNode;
}>) {
const { getToken } = useAuth();
const cloud = useMemo(
() =>
new AssistantCloud({
baseUrl: process.env.NEXT_PUBLIC_ASSISTANT_BASE_URL!,
authToken: async () => getToken({ template: "assistant-ui" }),
}),
[getToken],
);
const runtime = useLangGraphRuntime({
cloud,
stream: async function* (messages, { initialize }) {
const { externalId } = await initialize();
if (!externalId) throw new Error("Thread not found");
return sendMessage({
threadId: externalId,
messages,
});
},
create: async () => {
const { thread_id } = await createThread();
return { externalId: thread_id };
},
load: async (externalId) => {
const state = await getThreadState(externalId);
return {
messages:
(state.values as { messages?: LangChainMessage[] }).messages ?? [],
};
},
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
For Clerk authentication, configure the `"assistant-ui"` token template in
your Clerk dashboard.
The `useLangGraphRuntime` hook now directly accepts `cloud`, `create`, and `load` parameters for simplified thread management. The runtime handles thread lifecycle internally.
Add Thread UI Components \[#add-thread-ui-components]
Install the thread list component:
```sh
npx assistant-ui@latest add thread-list
```
```sh
npx shadcn@latest add https://r.assistant-ui.com/thread-list.json
```
Then add it to your application layout:
```tsx title="app/chat/page.tsx"
import { Thread } from "@/components/assistant-ui/thread";
import { ThreadList } from "@/components/assistant-ui/thread-list";
export default function ChatPage() {
  return (
    // Layout classes are illustrative; arrange the components to fit your app
    <div className="grid h-dvh grid-cols-[250px_1fr] gap-4 p-4">
      <ThreadList />
      <Thread />
    </div>
  );
}
```
Authentication \[#authentication]
The examples above show two authentication modes:
* **Anonymous**: Suitable for demos and prototypes. Creates a browser session-based user ID.
* **Authenticated**: For production use with user accounts. The authenticated example uses [Clerk](https://clerk.com/), but you can integrate any auth provider.
For other authentication providers or custom implementations, see the [Cloud Authorization](/docs/cloud/authorization) guide.
Next Steps \[#next-steps]
* Learn about [LangGraph runtime setup](/docs/runtimes/langgraph) for your application
* Explore [ThreadListRuntime](/docs/api-reference/runtimes/thread-list-runtime) for advanced thread management
* Check out the [LangGraph example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph) on GitHub
# Overview
URL: /docs/cloud/overview
Add thread persistence and chat history to your AI app in minutes.
Assistant Cloud is a hosted service that adds thread management, message persistence, and user authorization to AI chat applications. It works with any React UI—whether you use assistant-ui components or your own.
What You Get \[#what-you-get]
* **Thread Persistence** — Messages automatically save as conversations progress. Users can resume chats at any time, even across sessions.
* **Thread List** — Built-in UI components (or hooks) for browsing, switching, and managing conversations.
* **Auto-Generated Titles** — Conversations get meaningful titles based on the initial messages.
* **User Authorization** — Integrates with your auth provider to scope threads per user or workspace.
Supported Backends \[#supported-backends]
| Backend | Standalone Mode | With assistant-ui |
| --------- | ------------------------------------ | --------------------------------------------------- |
| AI SDK | [`useCloudChat`](/docs/cloud/ai-sdk) | [`useChatRuntime`](/docs/cloud/ai-sdk-assistant-ui) |
| LangGraph | — | [`useLangGraphRuntime`](/docs/cloud/langgraph) |
| Custom | — | Local Runtime |
Getting Started \[#getting-started]
You'll need an Assistant Cloud account. [Sign up here](https://cloud.assistant-ui.com/) to create a project.
From your project settings, copy the **Frontend API URL** (`https://proj-[ID].assistant-api.com`)—you'll need it for the guides below.
Choose Your Integration Path \[#choose-your-integration-path]
**Using AI SDK with your own UI?**\
→ Follow the [AI SDK guide](/docs/cloud/ai-sdk) for the `useCloudChat` hook. One hook, minimal changes to your existing code.
**Want pre-built components like `<Thread />` and `<ThreadList />`?**\
→ Follow the [AI SDK + assistant-ui guide](/docs/cloud/ai-sdk-assistant-ui) for full UI components with cloud persistence.
**Using LangGraph?**\
→ Follow the [LangGraph guide](/docs/cloud/langgraph) for LangGraph Cloud integration with thread persistence.
Example Repositories \[#example-repositories]
* [AI SDK (standalone)](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud-standalone) — Minimal `useCloudChat` setup
* [AI SDK + assistant-ui](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud) — Full component integration
* [LangGraph + assistant-ui](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph) — LangGraph Cloud with persistence
# Assistant Transport
URL: /docs/runtimes/assistant-transport
Stream agent state to the frontend and handle user commands for custom agents.
If you've built an agent as a Python or TypeScript script and want to add a UI to it, you need to solve two problems: streaming updates to the frontend and integrating with the UI framework. Assistant Transport handles both.
Assistant Transport streams your agent's complete state to the frontend in real-time. Unlike traditional approaches that only stream predefined message types (like text or tool calls), it streams your entire agent state—whatever structure your agent uses internally.
It consists of:
* **State streaming**: Efficiently streams updates to your agent state (supports any JSON object)
* **UI integration**: Converts your agent's state into assistant-ui components that render in the browser
* **Command handling**: Sends user actions (messages, tool executions, custom commands) back to your agent
When to Use Assistant Transport \[#when-to-use-assistant-transport]
Use Assistant Transport when:
* You don't have a streaming protocol yet and need one
* You want your agent's native state to be directly accessible in the frontend
* You're building a custom agent framework or one without a streaming protocol (e.g. OSS LangGraph)
Mental Model \[#mental-model]
The frontend receives state snapshots and converts them to React components. The goal is to have the UI be a stateless view on top of the agent framework state.
The agent server receives commands from the frontend. When a user interacts with the UI (sends a message, clicks a button, etc.), the frontend queues a command and sends it to the backend. Assistant Transport defines standard commands like `add-message` and `add-tool-result`, and you can define custom commands.
Command Lifecycle \[#command-lifecycle]
Commands go through the following lifecycle:
The runtime alternates between **idle** (no active backend request) and **sending** (request in flight). When a new command is created while idle, it's immediately sent. Otherwise, it's queued until the current request completes.
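The idle/sending behavior can be sketched as follows. This is an illustrative model, not the actual runtime implementation; `send_batch` stands in for the HTTP request to your backend:

```python
import asyncio

class CommandQueue:
    """Illustrative model of the idle/sending command lifecycle."""

    def __init__(self, send_batch):
        self.send_batch = send_batch  # async stand-in for the backend request
        self.queue = []
        self.sending = False

    def dispatch(self, command):
        self.queue.append(command)
        if not self.sending:
            # Idle: start a backend request immediately
            asyncio.ensure_future(self._flush())
        # Otherwise a request is in flight; the command stays queued

    async def _flush(self):
        self.sending = True
        while self.queue:
            # Send everything queued so far as one batch
            batch, self.queue = self.queue, []
            await self.send_batch(batch)  # commands arriving now queue up
        self.sending = False
```

Commands dispatched while a request is in flight are batched into the next request once the current one completes.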
To implement this architecture, you need to build two pieces:
1. **Backend endpoint** on the agent server that accepts commands and returns a stream of state snapshots
2. **Frontend-side [state converter](#state-converter)** that converts state snapshots to assistant-ui's data format so that the UI primitives work
Building a Backend Endpoint \[#building-a-backend-endpoint]
Let's build the backend endpoint step by step. You'll need to handle incoming commands, update your agent state, and stream the updates back to the frontend.
The backend endpoint receives POST requests with the following payload:
```typescript
{
state: T, // The previous state that the frontend has access to
commands: AssistantTransportCommand[],
system?: string,
tools?: ToolDefinition[],
threadId: string // The current thread/conversation identifier
}
```
The backend endpoint returns a stream of state snapshots using the `assistant-stream` library ([npm](https://www.npmjs.com/package/assistant-stream) / [PyPI](https://pypi.org/project/assistant-stream/)).
Handling Commands \[#handling-commands]
The backend endpoint processes commands from the `commands` array:
```python
for command in request.commands:
    if command.type == "add-message":
        ...  # Handle adding a user message
    elif command.type == "add-tool-result":
        ...  # Handle tool execution result
    elif command.type == "my-custom-command":
        ...  # Handle your custom command
```
Streaming Updates \[#streaming-updates]
To stream state updates, modify `controller.state` within your run callback:
```python
from assistant_stream import RunController, create_run
from assistant_stream.serialization import DataStreamResponse
@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
async def run_callback(controller: RunController):
# Emits "set" at path ["message"] with value "Hello"
controller.state["message"] = "Hello"
# Emits "append-text" at path ["message"] with value " World"
controller.state["message"] += " World"
# Create and return the stream
stream = create_run(run_callback, state=request.state)
return DataStreamResponse(stream)
```
The state snapshots are automatically streamed to the frontend using the operations described in [Streaming Protocol](#streaming-protocol).
> **Cancellation:** `create_run` exposes `controller.is_cancelled` and `controller.cancelled_event`.
> If the response stream is closed early (for example user cancel or client disconnect),
> these are set so your backend loop can exit cooperatively.
> `controller.cancelled_event` is a read-only signal object with `wait()` and `is_set()`.
> `create_run` gives callbacks a \~50ms cooperative shutdown window before forced task cancellation.
> Callback exceptions that happen during early-close cleanup are not re-raised to the stream consumer,
> but are logged with traceback at warning level for debugging.
> Put critical cleanup in `finally` blocks, since forced cancellation may happen after the grace window.
>
> ```python
> async def run_callback(controller: RunController):
> while not controller.is_cancelled:
> # Long-running work / model loop
> await asyncio.sleep(0.05)
> ```
>
> ```python
> async def run_callback(controller: RunController):
> await controller.cancelled_event.wait()
> # cancellation-aware shutdown path
> ```
Backend Reference Implementation \[#backend-reference-implementation]
```python
from assistant_stream import RunController, create_run
from assistant_stream.serialization import DataStreamResponse

@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
    async def run_callback(controller: RunController):
        # Initialize state
        if controller.state is None:
            controller.state = {}

        # Process commands
        for command in request.commands:
            ...  # Handle commands...

        # Run your agent and stream updates
        async for event in agent.stream():
            pass  # update controller.state

    # Create and return the stream
    stream = create_run(run_callback, state=request.state)
    return DataStreamResponse(stream)
```
```python
from assistant_stream.serialization import DataStreamResponse
from assistant_stream import RunController, create_run
@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
"""Chat endpoint with custom agent streaming."""
async def run_callback(controller: RunController):
# Initialize controller state
if controller.state is None:
controller.state = {"messages": []}
# Process commands
for command in request.commands:
if command.type == "add-message":
# Add message to messages array
controller.state["messages"].append(command.message)
# Run your custom agent and stream updates
async for message in your_agent.stream():
# Push message to messages array
controller.state["messages"].append(message)
# Create streaming response
stream = create_run(run_callback, state=request.state)
return DataStreamResponse(stream)
```
```python
from assistant_stream.serialization import DataStreamResponse
from assistant_stream import RunController, create_run
from assistant_stream.modules.langgraph import append_langgraph_event
from langchain_core.messages import HumanMessage
@app.post("/assistant")
async def chat_endpoint(request: ChatRequest):
"""Chat endpoint using LangGraph with streaming."""
async def run_callback(controller: RunController):
# Initialize controller state
if controller.state is None:
controller.state = {}
if "messages" not in controller.state:
controller.state["messages"] = []
input_messages = []
# Process commands
for command in request.commands:
if command.type == "add-message":
text_parts = [
part.text for part in command.message.parts
if part.type == "text" and part.text
]
if text_parts:
input_messages.append(HumanMessage(content=" ".join(text_parts)))
# Create initial state for LangGraph
input_state = {"messages": input_messages}
# Stream events from LangGraph
async for namespace, event_type, chunk in graph.astream(
input_state,
stream_mode=["messages", "updates"],
subgraphs=True
):
append_langgraph_event(
controller.state,
namespace,
event_type,
chunk
)
# Create streaming response
stream = create_run(run_callback, state=request.state)
return DataStreamResponse(stream)
```
Full example: [`python/assistant-transport-backend-langgraph`](https://github.com/assistant-ui/assistant-ui/tree/main/python/assistant-transport-backend-langgraph)
Streaming Protocol \[#streaming-protocol]
The assistant-stream state replication protocol allows for streaming updates to an arbitrary JSON object.
Operations \[#operations]
The protocol supports two operations:
> **Note:** We've found that these two operations are enough to handle all sorts of complex state operations efficiently. `set` handles value updates and nested structures, while `append-text` enables efficient streaming of text content.
set \[#set]
Sets a value at a specific path in the JSON object.
```json
// Operation
{ "type": "set", "path": ["status"], "value": "completed" }
// Before
{ "status": "pending" }
// After
{ "status": "completed" }
```
append-text \[#append-text]
Appends text to an existing string value at a path.
```json
// Operation
{ "type": "append-text", "path": ["message"], "value": " World" }
// Before
{ "message": "Hello" }
// After
{ "message": "Hello World" }
```
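An applier for these two operations can be written in a few lines. This is a sketch for illustration only; the `assistant-stream` client applies operations for you:

```python
def apply_operation(state: dict, op: dict) -> None:
    """Apply a single state-replication operation to `state` in place."""
    *parents, leaf = op["path"]
    target = state
    # Walk to the parent container, creating nested objects as needed
    for key in parents:
        target = target.setdefault(key, {})
    if op["type"] == "set":
        target[leaf] = op["value"]
    elif op["type"] == "append-text":
        target[leaf] = target.get(leaf, "") + op["value"]
    else:
        raise ValueError(f"unknown operation type: {op['type']}")
```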
Wire Format \[#wire-format]
The wire format will be migrated to Server-Sent Events (SSE) in a future
release.
The wire format is inspired by [AI SDK's data stream protocol](https://sdk.vercel.ai/docs/ai-sdk-ui/stream-protocol).
**State Update:**
```
aui-state:ObjectStreamOperation[]
```
```
aui-state:[{"type":"set","path":["status"],"value":"completed"}]
```
**Error:**
```
3:string
```
```
3:"error message"
```
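Under the current (pre-SSE) format, each frame is a prefixed line that can be split on the first colon. A minimal parser might look like this (a sketch; frame prefixes other than the two shown above are omitted):

```python
import json

def parse_frame(line: str):
    """Parse one wire-format frame into a (kind, payload) tuple."""
    prefix, _, payload = line.partition(":")
    if prefix == "aui-state":
        return ("state", json.loads(payload))  # list of operations
    if prefix == "3":
        return ("error", json.loads(payload))  # error message string
    raise ValueError(f"unknown frame prefix: {prefix!r}")
```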
Building a Frontend \[#building-a-frontend]
Now let's set up the frontend. The state converter is the heart of the integration—it transforms your agent's state into the format assistant-ui expects.
The `useAssistantTransportRuntime` hook is used to configure the runtime. It accepts the following config:
```typescript
{
initialState: T,
api: string,
resumeApi?: string,
converter: (state: T, connectionMetadata: ConnectionMetadata) => AssistantTransportState,
  headers?: Record<string, string> | (() => Promise<Record<string, string>>),
  body?: object,
  prepareSendCommandsRequest?: (body: SendCommandsRequestBody) => Record<string, unknown> | Promise<Record<string, unknown>>,
capabilities?: { edit?: boolean },
onResponse?: (response: Response) => void,
onFinish?: () => void,
onError?: (error: Error) => void,
onCancel?: () => void
}
```
State Converter \[#state-converter]
The state converter is the core of your frontend integration. It transforms your agent's state into assistant-ui's message format.
```typescript
(
state: T, // Your agent's state
connectionMetadata: {
pendingCommands: Command[], // Commands not yet sent to backend
isSending: boolean // Whether a request is in flight
}
) => {
messages: ThreadMessage[], // Messages to display
isRunning: boolean // Whether the agent is running
}
```
Converting Messages \[#converting-messages]
Use the `createMessageConverter` API to transform your agent's messages to assistant-ui format:
```typescript
import { unstable_createMessageConverter as createMessageConverter } from "@assistant-ui/react";
// Define your message type
type YourMessageType = {
id: string;
role: "user" | "assistant";
content: string;
timestamp: number;
};
// Define a converter function for a single message
const exampleMessageConverter = (message: YourMessageType) => {
// Transform a single message to assistant-ui format
return {
role: message.role,
content: [{ type: "text", text: message.content }]
};
};
const messageConverter = createMessageConverter(exampleMessageConverter);
const converter = (state: YourAgentState) => {
return {
messages: messageConverter.toThreadMessages(state.messages),
isRunning: false
};
};
```
```typescript
import { unstable_createMessageConverter as createMessageConverter } from "@assistant-ui/react";
import { convertLangChainMessages } from "@assistant-ui/react-langgraph";
const messageConverter = createMessageConverter(convertLangChainMessages);
const converter = (state: YourAgentState) => {
return {
messages: messageConverter.toThreadMessages(state.messages),
isRunning: false
};
};
```
**Reverse mapping:**
The message converter allows you to retrieve the original message format anywhere inside assistant-ui. This lets you access your agent's native message structure from any assistant-ui component:
```typescript
// Get original message(s) from a ThreadMessage anywhere in assistant-ui
const originalMessage = messageConverter.toOriginalMessage(threadMessage);
```
Optimistic Updates from Commands \[#optimistic-updates-from-commands]
The converter also receives `connectionMetadata` which contains pending commands. Use this to show optimistic updates:
```typescript
const converter = (state: State, connectionMetadata: ConnectionMetadata) => {
// Extract pending messages from commands
const optimisticMessages = connectionMetadata.pendingCommands
.filter((c) => c.type === "add-message")
.map((c) => c.message);
return {
messages: [...state.messages, ...optimisticMessages],
isRunning: connectionMetadata.isSending || false
};
};
```
Handling Errors and Cancellations \[#handling-errors-and-cancellations]
The `onError` and `onCancel` callbacks receive an `updateState` function that allows you to update the agent state on the client side without making a server request:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
onError: (error, { commands, updateState }) => {
console.error("Error occurred:", error);
console.log("Commands in transit:", commands);
// Update state to reflect the error
updateState((currentState) => ({
...currentState,
lastError: error.message,
}));
},
onCancel: ({ commands, updateState }) => {
console.log("Request cancelled");
console.log("Commands in transit or queued:", commands);
// Update state to reflect cancellation
updateState((currentState) => ({
...currentState,
status: "cancelled",
}));
},
});
```
> **Note:** `onError` receives commands that were in transit, while `onCancel` receives both in-transit and queued commands.
Custom Headers and Body \[#custom-headers-and-body]
You can pass custom headers and body to the backend endpoint:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
headers: {
"Authorization": "Bearer token",
"X-Custom-Header": "value",
},
body: {
customField: "value",
},
});
```
Dynamic Headers and Body \[#dynamic-headers-and-body]
You can also evaluate the header and body payloads on every request by passing an async function:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
headers: async () => ({
"Authorization": `Bearer ${await getAccessToken()}`,
"X-Request-ID": crypto.randomUUID(),
}),
body: async () => ({
customField: "value",
requestId: crypto.randomUUID(),
timestamp: Date.now(),
}),
});
```
Transforming the Request Body \[#transforming-the-request-body]
Use `prepareSendCommandsRequest` to transform the entire request body before it is sent to the backend. This receives the fully assembled body object and returns the (potentially transformed) body.
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
prepareSendCommandsRequest: (body) => ({
...body,
trackingId: crypto.randomUUID(),
}),
});
```
This is useful for adding tracking IDs, transforming commands, or injecting metadata that depends on the assembled request:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
prepareSendCommandsRequest: (body) => ({
...body,
commands: body.commands.map((cmd) =>
cmd.type === "add-message"
? { ...cmd, trackingId: crypto.randomUUID() }
: cmd,
),
}),
});
```
Editing Messages \[#editing-messages]
By default, editing messages is disabled. To enable it, set `capabilities.edit` to `true`:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
capabilities: {
edit: true,
},
});
```
`add-message` commands always include `parentId` and `sourceId` fields:
```typescript
{
type: "add-message",
message: { role: "user", parts: [...] },
parentId: "msg-3", // The message after which this message should be inserted
sourceId: "msg-4", // The ID of the message being replaced (null for new messages)
}
```
Backend Handling \[#backend-handling]
When the backend receives an `add-message` command with a `parentId`, it should:
1. Truncate all messages after the message with `parentId`
2. Append the new message
3. Stream the updated state back to the frontend
```python
for command in request.commands:
if command.type == "add-message":
if hasattr(command, "parentId") and command.parentId is not None:
# Find the parent message index and truncate
parent_idx = next(
i for i, m in enumerate(messages) if m.id == command.parentId
)
messages = messages[:parent_idx + 1]
# Append the new message
messages.append(command.message)
```
`parentId` and `sourceId` are always included on `add-message` commands. For new messages, `sourceId` will be `null`.
Resuming from a Sync Server \[#resuming-from-a-sync-server]
The sync server is currently available only as part of the enterprise plan. Please contact us for more information.
To enable resumability, you need to:
1. Pass a `resumeApi` URL to `useAssistantTransportRuntime` that points to your sync server
2. Use the `unstable_resumeRun` API to resume a conversation
```typescript
import { useAui } from "@assistant-ui/react";
const runtime = useAssistantTransportRuntime({
// ... other options
api: "http://localhost:8010/assistant",
resumeApi: "http://localhost:8010/resume", // Sync server endpoint
// ... other options
});
// Typically called on thread switch or mount to check if sync server has anything to resume
const aui = useAui();
aui.thread().unstable_resumeRun({
parentId: null, // Ignored (will be removed in a future version)
});
```
Accessing Runtime State \[#accessing-runtime-state]
Use the `useAssistantTransportState` hook to access the current agent state from any component:
```typescript
import { useAssistantTransportState } from "@assistant-ui/react";
function MyComponent() {
const state = useAssistantTransportState();
  return <pre>{JSON.stringify(state)}</pre>;
}
```
You can also pass a selector function to extract specific values:
```typescript
function MyComponent() {
const messages = useAssistantTransportState((state) => state.messages);
  return <div>Message count: {messages.length}</div>;
}
```
Type Safety \[#type-safety]
Use module augmentation to add types for your agent state:
```typescript title="assistant.config.ts"
import "@assistant-ui/react";
declare module "@assistant-ui/react" {
namespace Assistant {
interface ExternalState {
myState: {
messages: Message[];
customField: string;
};
}
}
}
```
> **Note:** Place this file anywhere in your project (e.g., `src/assistant.config.ts` or at the project root). TypeScript will automatically pick up the type augmentation through module resolution—you don't need to import this file anywhere.
After adding the type augmentation, `useAssistantTransportState` will be fully typed:
```typescript
function MyComponent() {
// TypeScript knows about your custom fields
const customField = useAssistantTransportState((state) => state.customField);
  return <div>{customField}</div>;
}
```
Accessing the Original Message \[#accessing-the-original-message]
If you're using `createMessageConverter`, you can access the original message format from any assistant-ui component using the converter's `toOriginalMessage` method:
```typescript
import { unstable_createMessageConverter as createMessageConverter } from "@assistant-ui/react";
import { useMessage } from "@assistant-ui/react";
const messageConverter = createMessageConverter(yourMessageConverter);
function MyMessageComponent() {
const message = useMessage();
// Get the original message(s) from the converted ThreadMessage
const originalMessage = messageConverter.toOriginalMessage(message);
// Access your agent's native message structure
  return <div>{originalMessage.yourCustomField}</div>;
}
```
You can also use `toOriginalMessages` to get all original messages when a ThreadMessage was created from multiple source messages:
```typescript
const originalMessages = messageConverter.toOriginalMessages(message);
```
Frontend Reference Implementation \[#frontend-reference-implementation]
```tsx
"use client";
import {
AssistantRuntimeProvider,
AssistantTransportConnectionMetadata,
useAssistantTransportRuntime,
} from "@assistant-ui/react";
type State = {
messages: Message[];
};
// Converter function: transforms agent state to assistant-ui format
const converter = (
state: State,
connectionMetadata: AssistantTransportConnectionMetadata,
) => {
// Add optimistic updates for pending commands
const optimisticMessages = connectionMetadata.pendingCommands
.filter((c) => c.type === "add-message")
.map((c) => c.message);
return {
messages: [...state.messages, ...optimisticMessages],
isRunning: connectionMetadata.isSending || false,
};
};
export function MyRuntimeProvider({ children }) {
const runtime = useAssistantTransportRuntime({
initialState: {
messages: [],
},
api: "http://localhost:8010/assistant",
converter,
headers: async () => ({
"Authorization": "Bearer token",
}),
body: {
"custom-field": "custom-value",
},
onResponse: (response) => {
console.log("Response received from server");
},
onFinish: () => {
console.log("Conversation completed");
},
onError: (error, { commands, updateState }) => {
console.error("Assistant transport error:", error);
console.log("Commands in transit:", commands);
},
onCancel: ({ commands, updateState }) => {
console.log("Request cancelled");
console.log("Commands in transit or queued:", commands);
},
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
```tsx
"use client";
import {
AssistantRuntimeProvider,
AssistantTransportConnectionMetadata,
unstable_createMessageConverter as createMessageConverter,
useAssistantTransportRuntime,
} from "@assistant-ui/react";
import {
convertLangChainMessages,
LangChainMessage,
} from "@assistant-ui/react-langgraph";
type State = {
messages: LangChainMessage[];
};
const LangChainMessageConverter = createMessageConverter(
convertLangChainMessages,
);
// Converter function: transforms agent state to assistant-ui format
const converter = (
state: State,
connectionMetadata: AssistantTransportConnectionMetadata,
) => {
// Add optimistic updates for pending commands
const optimisticStateMessages = connectionMetadata.pendingCommands.map(
(c): LangChainMessage[] => {
if (c.type === "add-message") {
return [
{
type: "human" as const,
content: [
{
type: "text" as const,
text: c.message.parts
.map((p) => (p.type === "text" ? p.text : ""))
.join("\n"),
},
],
},
];
}
return [];
},
);
const messages = [...state.messages, ...optimisticStateMessages.flat()];
return {
messages: LangChainMessageConverter.toThreadMessages(messages),
isRunning: connectionMetadata.isSending || false,
};
};
export function MyRuntimeProvider({ children }) {
const runtime = useAssistantTransportRuntime({
initialState: {
messages: [],
},
api: "http://localhost:8010/assistant",
converter,
headers: async () => ({
"Authorization": "Bearer token",
}),
body: {
"custom-field": "custom-value",
},
onResponse: (response) => {
console.log("Response received from server");
},
onFinish: () => {
console.log("Conversation completed");
},
onError: (error, { commands, updateState }) => {
console.error("Assistant transport error:", error);
console.log("Commands in transit:", commands);
},
onCancel: ({ commands, updateState }) => {
console.log("Request cancelled");
console.log("Commands in transit or queued:", commands);
},
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
Full example: [`examples/with-assistant-transport`](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-assistant-transport)
Custom Commands \[#custom-commands]
Defining Custom Commands \[#defining-custom-commands]
Use module augmentation to define a custom command:
```typescript title="assistant.config.ts"
import "@assistant-ui/react";
declare module "@assistant-ui/react" {
namespace Assistant {
interface Commands {
myCustomCommand: {
type: "my-custom-command";
data: string;
};
}
}
}
```
Issuing Commands \[#issuing-commands]
Use the `useAssistantTransportSendCommand` hook to send custom commands:
```typescript
import { useAssistantTransportSendCommand } from "@assistant-ui/react";
function MyComponent() {
const sendCommand = useAssistantTransportSendCommand();
const handleClick = () => {
sendCommand({
type: "my-custom-command",
data: "Hello, world!",
});
};
  return <button onClick={handleClick}>Send command</button>;
}
```
Backend Integration \[#backend-integration]
The backend receives custom commands in the `commands` array, just like built-in commands:
```python
for command in request.commands:
    if command.type == "add-message":
        ...  # Handle add-message command
    elif command.type == "add-tool-result":
        ...  # Handle add-tool-result command
    elif command.type == "my-custom-command":
        # Handle your custom command
        data = command.data
```
Optimistic Updates \[#optimistic-updates]
Update the [state converter](#state-converter) to optimistically handle the custom command:
```typescript
const converter = (state: State, connectionMetadata: ConnectionMetadata) => {
// Filter custom commands from pending commands
const customCommands = connectionMetadata.pendingCommands.filter(
(c) => c.type === "my-custom-command"
);
// Apply optimistic updates based on custom commands
const optimisticState = {
...state,
customData: customCommands.map((c) => c.data),
};
return {
messages: state.messages,
state: optimisticState,
isRunning: connectionMetadata.isSending || false,
};
};
```
Cancellation and Error Behavior \[#cancellation-and-error-behavior]
Custom commands follow the same lifecycle as built-in commands. You can update your `onError` and `onCancel` handlers to take custom commands into account:
```typescript
const runtime = useAssistantTransportRuntime({
// ... other options
onError: (error, { commands, updateState }) => {
// Check if any custom commands were in transit
const customCommands = commands.filter((c) => c.type === "my-custom-command");
if (customCommands.length > 0) {
// Handle custom command errors
updateState((state) => ({
...state,
customCommandFailed: true,
}));
}
},
onCancel: ({ commands, updateState }) => {
// Check if any custom commands were queued or in transit
const customCommands = commands.filter((c) => c.type === "my-custom-command");
if (customCommands.length > 0) {
// Handle custom command cancellation
updateState((state) => ({
...state,
customCommandCancelled: true,
}));
}
},
});
```
# Data Stream Protocol
URL: /docs/runtimes/data-stream
Integration with data stream protocol endpoints for streaming AI responses.
The `@assistant-ui/react-data-stream` package provides integration with data stream protocol endpoints, enabling streaming AI responses with tool support and state management.
Overview \[#overview]
The data stream protocol is a standardized format for streaming AI responses that supports:
* **Streaming text responses** with real-time updates
* **Tool calling** with structured parameters and results
* **State management** for conversation context
* **Error handling** and cancellation support
* **Attachment support** for multimodal interactions
Installation \[#installation]
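Install the integration package (shown with npm; any package manager works):

```sh
npm install @assistant-ui/react-data-stream
```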
Basic Usage \[#basic-usage]
Set up the Runtime \[#set-up-the-runtime]
Use `useDataStreamRuntime` to connect to your data stream endpoint:
```tsx title="app/page.tsx"
"use client";
import { useDataStreamRuntime } from "@assistant-ui/react-data-stream";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { Thread } from "@/components/assistant-ui/thread";
export default function ChatPage() {
const runtime = useDataStreamRuntime({
api: "/api/chat",
});
  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
Create Backend Endpoint \[#create-backend-endpoint]
Your backend endpoint should accept POST requests and return data stream responses:
```typescript title="app/api/chat/route.ts"
import { createAssistantStreamResponse } from "assistant-stream";
export async function POST(request: Request) {
const { messages, tools, system, threadId } = await request.json();
return createAssistantStreamResponse(async (controller) => {
// Process the request with your AI provider
const stream = await processWithAI({
messages,
tools,
system,
});
// Stream the response
for await (const chunk of stream) {
controller.appendText(chunk.text);
}
});
}
```
The request body includes:
* `messages` - The conversation history
* `tools` - Available tool definitions
* `system` - System prompt (if configured)
* `threadId` - The current thread/conversation identifier
Advanced Configuration \[#advanced-configuration]
Custom Headers and Authentication \[#custom-headers-and-authentication]
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
headers: {
"Authorization": "Bearer " + token,
"X-Custom-Header": "value",
},
credentials: "include",
});
```
Dynamic Headers \[#dynamic-headers]
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
headers: async () => {
const token = await getAuthToken();
return {
"Authorization": "Bearer " + token,
};
},
});
```
Dynamic Body \[#dynamic-body]
```tsx
const runtime = useDataStreamRuntime({
api: "/api/chat",
headers: async () => ({
"Authorization": `Bearer ${await getAuthToken()}`,
}),
body: async () => ({
requestId: crypto.randomUUID(),
timestamp: Date.now(),
signature: await computeSignature(),
}),
});
```
Event Callbacks \[#event-callbacks]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  onResponse: (response) => {
    console.log("Response received:", response.status);
  },
  onFinish: (message) => {
    console.log("Message completed:", message);
  },
  onError: (error) => {
    console.error("Error occurred:", error);
  },
  onCancel: () => {
    console.log("Request cancelled");
  },
});
```
Tool Integration \[#tool-integration]
Human-in-the-loop tools (using `human()` for tool interrupts) are not supported
in the data stream runtime. If you need human approval workflows or interactive
tool UIs, consider using [LocalRuntime](/docs/runtimes/custom/local) or
[Assistant Cloud](/docs/cloud/overview) instead.
Frontend Tools \[#frontend-tools]
Use the `frontendTools` helper to serialize client-side tools:
```tsx
import { frontendTools } from "@assistant-ui/react-data-stream";
import { makeAssistantTool } from "@assistant-ui/react";
import { z } from "zod";

const weatherTool = makeAssistantTool({
  toolName: "get_weather",
  description: "Get current weather",
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    // fetchWeather is your own data-fetching helper
    const weather = await fetchWeather(location);
    return `Weather in ${location}: ${weather}`;
  },
});

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  body: {
    tools: frontendTools({
      get_weather: weatherTool,
    }),
  },
});
```
Backend Tool Processing \[#backend-tool-processing]
Your backend should handle tool calls and return results:
```typescript title="Backend tool handling"
// Tools are automatically forwarded to your endpoint
const { tools } = await request.json();

// Process tools with your AI provider
const response = await ai.generateText({
  messages,
  tools,
  // Tool results are streamed back automatically
});
```
Assistant Cloud Integration \[#assistant-cloud-integration]
For Assistant Cloud deployments, use `useCloudRuntime`:
```tsx
import { useCloudRuntime } from "@assistant-ui/react-data-stream";
const runtime = useCloudRuntime({
  cloud: assistantCloud,
  assistantId: "my-assistant-id",
});
```
The `useCloudRuntime` hook is currently under active development and not yet ready for production use.
Message Conversion \[#message-conversion]
Framework-Agnostic Conversion (Recommended) \[#framework-agnostic-conversion-recommended]
For custom integrations, use the framework-agnostic utilities from `assistant-stream`:
```tsx
import { toGenericMessages, toToolsJSONSchema } from "assistant-stream";
// Convert messages to a generic format
const genericMessages = toGenericMessages(messages);
// Convert tools to JSON Schema format
const toolSchemas = toToolsJSONSchema(tools);
```
The `GenericMessage` format can be easily converted to any LLM provider format:
```tsx
import type { GenericMessage } from "assistant-stream";
// GenericMessage is a union of:
// - { role: "system"; content: string }
// - { role: "user"; content: (GenericTextPart | GenericFilePart)[] }
// - { role: "assistant"; content: (GenericTextPart | GenericToolCallPart)[] }
// - { role: "tool"; content: GenericToolResultPart[] }
```
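As a sketch of what "easily converted" means in practice, here is a hypothetical flattening of such messages into plain `{ role, content }` pairs for a text-only provider. The simplified part types below are stand-ins for illustration, not the exported `GenericMessage` types.

```typescript
// Simplified stand-ins for the generic part types (illustrative only).
type Part = { type: string; text?: string };
type SimpleGenericMessage = {
  role: "system" | "user" | "assistant" | "tool";
  content: string | Part[];
};

// Flatten each message into { role, content }, keeping only text parts —
// enough for a provider that accepts plain-string messages.
function toSimpleMessages(
  messages: SimpleGenericMessage[],
): { role: string; content: string }[] {
  return messages.map((m) => ({
    role: m.role,
    content:
      typeof m.content === "string"
        ? m.content
        : m.content
            .filter((p) => p.type === "text" && p.text !== undefined)
            .map((p) => p.text)
            .join(""),
  }));
}
```

A real adapter would also map file parts, tool calls, and tool results into the provider's equivalents; only the overall shape is shown here.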
AI SDK Specific Conversion \[#ai-sdk-specific-conversion]
For AI SDK integration, use `toLanguageModelMessages`:
```tsx
import { toLanguageModelMessages } from "@assistant-ui/react-data-stream";
// Convert to AI SDK LanguageModelV2Message format
const languageModelMessages = toLanguageModelMessages(messages, {
  unstable_includeId: true, // Include message IDs
});
```
`toLanguageModelMessages` internally uses `toGenericMessages` and adds AI SDK-specific transformations.
For new custom integrations, prefer using `toGenericMessages` directly.
Error Handling \[#error-handling]
The runtime automatically handles common error scenarios:
* **Network errors**: Automatically retried with exponential backoff
* **Stream interruptions**: Gracefully handled with partial content preservation
* **Tool execution errors**: Displayed in the UI with error states
* **Cancellation**: Clean abort signal handling
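The retry behavior can be pictured as a standard exponential backoff schedule. This sketch uses illustrative base and cap values — the runtime's actual delays are not specified here.

```typescript
// Delay (ms) before the nth retry attempt (0-indexed): doubles each
// attempt, capped at maxMs. Values are illustrative, not the runtime's.
function backoffDelay(attempt: number, baseMs = 500, maxMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```

Capping the delay keeps long outages from pushing the wait into minutes while still spacing out repeated failures.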
Best Practices \[#best-practices]
Performance Optimization \[#performance-optimization]
```tsx
import React, { useMemo } from "react";

// Use React.memo for expensive components
const OptimizedThread = React.memo(Thread);

// Memoize the runtime configuration so it isn't re-created on every render
const runtimeConfig = useMemo(
  () => ({
    api: "/api/chat",
    headers: { "Authorization": `Bearer ${token}` },
  }),
  [token],
);

const runtime = useDataStreamRuntime(runtimeConfig);
```
Error Boundaries \[#error-boundaries]
```tsx
import { ErrorBoundary } from "react-error-boundary";
function ChatErrorFallback({ error, resetErrorBoundary }) {
  return (
    <div role="alert">
      <p>Something went wrong:</p>
      <pre>{error.message}</pre>
      <button onClick={resetErrorBoundary}>Try again</button>
    </div>
  );
}

export default function App() {
  return (
    <ErrorBoundary FallbackComponent={ChatErrorFallback}>
      {/* your chat page */}
    </ErrorBoundary>
  );
}
```
State Persistence \[#state-persistence]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  body: {
    // Include conversation state
    state: conversationState,
  },
  onFinish: (message) => {
    // Save state after each message
    saveConversationState(message.metadata.unstable_state);
  },
});
```
Examples \[#examples]
Explore our [examples repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples) for implementation references.
API Reference \[#api-reference]
For detailed API documentation, see the [`@assistant-ui/react-data-stream` API Reference](/docs/api-reference/integrations/react-data-stream).
# Helicone
URL: /docs/runtimes/helicone
Configure Helicone proxy for OpenAI API logging and monitoring.
Helicone acts as a proxy for your OpenAI API calls, enabling detailed logging and monitoring. To integrate, update your API base URL and add the Helicone-Auth header.
AI SDK by Vercel \[#ai-sdk-by-vercel]
1. **Set Environment Variables:**
* `HELICONE_API_KEY`
* `OPENAI_API_KEY`
2. **Configure the OpenAI client:**
```ts
import { createOpenAI } from "@ai-sdk/openai";
import { streamText } from "ai";
const openai = createOpenAI({
  baseURL: "https://oai.helicone.ai/v1",
  headers: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
  },
});

export async function POST(req: Request) {
  const { prompt } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    prompt,
  });

  return result.toDataStreamResponse();
}
```
LangChain Integration (Python) \[#langchain-integration-python]
1. **Set Environment Variables:**
* `HELICONE_API_KEY`
* `OPENAI_API_KEY`
2. **Configure ChatOpenAI:**
```python
from langchain_openai import ChatOpenAI
import os

llm = ChatOpenAI(
    model_name="gpt-3.5-turbo",
    temperature=0,
    openai_api_base="https://oai.helicone.ai/v1",
    openai_api_key=os.environ["OPENAI_API_KEY"],
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)
```
Summary \[#summary]
Update your API base URL to `https://oai.helicone.ai/v1` and add the `Helicone-Auth` header with your API key either in your Vercel AI SDK or LangChain configuration.
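Regardless of framework, the integration boils down to the same two pieces of configuration. A small helper makes this explicit (the function name is illustrative; the base URL and header name come from the steps above):

```typescript
// Build the two settings any OpenAI-compatible client needs
// to route traffic through the Helicone proxy.
function heliconeConfig(heliconeApiKey: string) {
  return {
    baseURL: "https://oai.helicone.ai/v1",
    headers: { "Helicone-Auth": `Bearer ${heliconeApiKey}` },
  };
}
```

Spread this into whichever client constructor you use, and requests are logged without any other code changes.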
# LangChain LangServe
URL: /docs/runtimes/langserve
Connect to LangServe endpoints via Vercel AI SDK integration.
This integration has not been tested with AI SDK v5.
Overview \[#overview]
Integration with a LangServe server via Vercel AI SDK.
Getting Started \[#getting-started]
Create a Next.js project \[#create-a-nextjs-project]
```sh
npx create-next-app@latest my-app
cd my-app
```
Install @langchain/core, AI SDK and @assistant-ui/react \[#install-langchaincore-ai-sdk-and-assistant-uireact]
Setup a backend route under /api/chat \[#setup-a-backend-route-under-apichat]
```tsx title="@/app/api/chat/route.ts"
import { RemoteRunnable } from "@langchain/core/runnables/remote";
import { toDataStreamResponse } from "@ai-sdk/langchain";
export const maxDuration = 30;
export async function POST(req: Request) {
  const { messages } = await req.json();

  // TODO replace with your own LangServe URL
  const remoteChain = new RemoteRunnable({
    url: "",
  });

  const stream = await remoteChain.stream({
    messages,
  });

  return toDataStreamResponse(stream);
}
```
Define a MyRuntimeProvider component \[#define-a-myruntimeprovider-component]
```tsx twoslash include MyRuntimeProvider title="@/app/MyRuntimeProvider.tsx"
// @filename: /app/MyRuntimeProvider.tsx
// ---cut---
"use client";

import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";

export function MyRuntimeProvider({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  const runtime = useChatRuntime();

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
Wrap your app in MyRuntimeProvider \[#wrap-your-app-in-myruntimeprovider]
```tsx twoslash title="@/app/layout.tsx"
// @include: MyRuntimeProvider
// @filename: /app/layout.tsx
// ---cut---
import type { ReactNode } from "react";
import { MyRuntimeProvider } from "@/app/MyRuntimeProvider";
export default function RootLayout({
  children,
}: Readonly<{
  children: ReactNode;
}>) {
  return (
    <html lang="en">
      <body>
        <MyRuntimeProvider>{children}</MyRuntimeProvider>
      </body>
    </html>
  );
}
```
# Picking a Runtime
URL: /docs/runtimes/pick-a-runtime
Which runtime fits your backend? Decision guide for common setups.
Choosing the right runtime is crucial for your assistant-ui implementation. This guide helps you navigate the options based on your specific needs.
Quick Decision Tree \[#quick-decision-tree]
Core Runtimes \[#core-runtimes]
These are the foundational runtimes that power assistant-ui:
Pre-Built Integrations \[#pre-built-integrations]
For popular frameworks, we provide ready-to-use integrations built on top of our core runtimes:
Understanding Runtime Architecture \[#understanding-runtime-architecture]
How Pre-Built Integrations Work \[#how-pre-built-integrations-work]
The pre-built integrations (AI SDK, LangGraph, etc.) are **not separate runtime types**. They're convenient wrappers built on top of our core runtimes:
* **AI SDK Integration** → Built on `LocalRuntime` with streaming adapter
* **LangGraph Runtime** → Built on `LocalRuntime` with graph execution adapter
* **LangServe Runtime** → Built on `LocalRuntime` with LangServe client adapter
* **Mastra Runtime** → Built on `LocalRuntime` with workflow adapter
This means you get all the benefits of `LocalRuntime` (automatic state management, built-in features) with zero configuration for your specific framework.
When to Use Pre-Built vs Core Runtimes \[#when-to-use-pre-built-vs-core-runtimes]
**Use a pre-built integration when:**
* You're already using that framework
* You want the fastest possible setup
* The integration covers your needs
**Use a core runtime when:**
* You have a custom backend
* You need features not exposed by the integration
* You want full control over the implementation
Pre-built integrations can always be replaced with a custom `LocalRuntime` or `ExternalStoreRuntime` implementation if you need more control later.
Feature Comparison \[#feature-comparison]
Core Runtime Capabilities \[#core-runtime-capabilities]
| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
| -------------------- | -------------- | ----------------------- |
| **State Management** | Automatic | You control |
| **Setup Complexity** | Simple | Moderate |
| **Message Editing** | Built-in | Implement `onEdit` |
| **Branch Switching** | Built-in | Implement `setMessages` |
| **Regeneration** | Built-in | Implement `onReload` |
| **Cancellation** | Built-in | Implement `onCancel` |
| **Multi-thread** | Via adapters | Via adapters |
Available Adapters \[#available-adapters]
| Adapter | `LocalRuntime` | `ExternalStoreRuntime` |
| ----------- | -------------- | ---------------------- |
| ChatModel | ✅ Required | ❌ N/A |
| Attachments | ✅ | ✅ |
| Speech | ✅ | ✅ |
| Feedback | ✅ | ✅ |
| History | ✅ | ❌ Use your state |
| Suggestions | ✅ | ❌ Use your state |
Common Implementation Patterns \[#common-implementation-patterns]
Vercel AI SDK with Streaming \[#vercel-ai-sdk-with-streaming]
```tsx
import { Thread } from "@/components/assistant-ui/thread";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";

export function MyAssistant() {
  const runtime = useChatRuntime();

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
Custom Backend with LocalRuntime \[#custom-backend-with-localruntime]
```tsx
import { useLocalRuntime } from "@assistant-ui/react";
const runtime = useLocalRuntime({
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal,
    });
    return response.json();
  },
});
```
Redux Integration with ExternalStoreRuntime \[#redux-integration-with-externalstoreruntime]
```tsx
import { useExternalStoreRuntime } from "@assistant-ui/react";
const messages = useSelector(selectMessages);
const dispatch = useDispatch();
const runtime = useExternalStoreRuntime({
  messages,
  onNew: async (message) => {
    dispatch(addUserMessage(message));
    const response = await api.chat(message);
    dispatch(addAssistantMessage(response));
  },
  setMessages: (messages) => dispatch(setMessages(messages)),
  onEdit: async (message) => dispatch(editMessage(message)),
  onReload: async (parentId) => dispatch(reloadMessage(parentId)),
});
```
Examples \[#examples]
Explore our implementation examples:
* **[AI SDK v6 Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-ai-sdk-v6)** - Vercel AI SDK with `useChatRuntime`
* **[External Store Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-external-store)** - `ExternalStoreRuntime` with custom state
* **[Assistant Cloud Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud)** - Multi-thread with cloud persistence
* **[LangGraph Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph)** - Agent workflows
Common Pitfalls to Avoid \[#common-pitfalls-to-avoid]
LocalRuntime Pitfalls \[#localruntime-pitfalls]
* **Forgetting the adapter**: `LocalRuntime` requires a `ChatModelAdapter` - it won't work without one
* **Not handling errors**: Always handle API errors in your adapter's `run` function
* **Missing abort signal**: Pass `abortSignal` to your fetch calls for proper cancellation
ExternalStoreRuntime Pitfalls \[#externalstoreruntime-pitfalls]
* **Mutating state**: Always create new arrays/objects when updating messages
* **Missing handlers**: Each UI feature requires its corresponding handler (e.g., no edit button without `onEdit`)
* **Forgetting optimistic updates**: Set `isRunning` to `true` for loading states
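For instance, an optimistic update in an external store typically appends the user message immutably and flips `isRunning` in one step. A reducer-style sketch follows — the state shape is an assumption for illustration, not the library's type:

```typescript
// Minimal external-store chat state (illustrative shape).
type ChatState = {
  messages: { role: string; content: string }[];
  isRunning: boolean;
};

// Append the user message without mutating the previous state,
// and mark the run as in progress for the loading indicator.
function startRun(state: ChatState, userText: string): ChatState {
  return {
    messages: [...state.messages, { role: "user", content: userText }],
    isRunning: true,
  };
}
```

Because `startRun` returns a fresh object, React (or Redux) can detect the change by reference, which is exactly what the "never mutate state" pitfall above is about.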
General Pitfalls \[#general-pitfalls]
* **Wrong integration level**: Don't use `LocalRuntime` if you already have Vercel AI SDK - use the AI SDK integration instead
* **Over-engineering**: Start with pre-built integrations before building custom solutions
* **Ignoring TypeScript**: The types will guide you to the correct implementation
Next Steps \[#next-steps]
1. **Choose your runtime** based on the decision tree above
2. **Follow the specific guide**:
* [AI SDK Integration](/docs/runtimes/ai-sdk/use-chat)
* [`LocalRuntime` Guide](/docs/runtimes/custom/local)
* [`ExternalStoreRuntime` Guide](/docs/runtimes/custom/external-store)
* [LangGraph Integration](/docs/runtimes/langgraph)
3. **Start with an example** from our [examples repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples)
4. **Add features progressively** using adapters
5. **Consider Assistant Cloud** for production persistence
Need help? Join our [Discord community](https://discord.gg/S9dwgCNEFs) or check the [GitHub](https://github.com/assistant-ui/assistant-ui).
# Adapters
URL: /docs/react-native/adapters
Storage and title generation adapters for React Native.
Adapters customize how the local runtime persists threads and generates titles. Pass them to `useLocalRuntime` via options.
```tsx
import AsyncStorage from "@react-native-async-storage/async-storage";
import {
  useLocalRuntime,
  createSimpleTitleAdapter,
} from "@assistant-ui/react-native";

const runtime = useLocalRuntime(chatModel, {
  storage: AsyncStorage,
  titleGenerator: createSimpleTitleAdapter(),
});
```
Storage \[#storage]
The `storage` option accepts any object with `getItem`, `setItem`, and `removeItem` methods (matching the `AsyncStorage` interface). When provided, threads and messages are persisted across app restarts.
```tsx
type AsyncStorageLike = {
  getItem(key: string): Promise<string | null>;
  setItem(key: string, value: string): Promise<void>;
  removeItem(key: string): Promise<void>;
};
```
AsyncStorage \[#asyncstorage]
The most common choice for React Native:
```tsx
import AsyncStorage from "@react-native-async-storage/async-storage";
const runtime = useLocalRuntime(chatModel, {
  storage: AsyncStorage,
});
```
An optional `storagePrefix` parameter namespaces the keys:
```tsx
const runtime = useLocalRuntime(chatModel, {
  storage: AsyncStorage,
  storagePrefix: "chat:",
  // Keys: "chat:threads", "chat:messages:", ...
});
```
No storage (default) \[#no-storage-default]
When `storage` is omitted, threads live in memory only — lost on app restart.
```tsx
const runtime = useLocalRuntime(chatModel);
```
Advanced: createLocalStorageAdapter \[#advanced-createlocalstorageadapter]
For more control, use `createLocalStorageAdapter` directly to create a `RemoteThreadListAdapter`:
```tsx
import AsyncStorage from "@react-native-async-storage/async-storage";
import {
  createLocalStorageAdapter,
  createSimpleTitleAdapter,
} from "@assistant-ui/react-native";

const adapter = createLocalStorageAdapter({
  storage: AsyncStorage,
  prefix: "chat:",
  titleGenerator: createSimpleTitleAdapter(),
});
```
TitleGenerationAdapter \[#titlegenerationadapter]
Generates a title for a thread based on its messages.
```tsx
interface TitleGenerationAdapter {
  generateTitle(messages: ThreadMessage[]): Promise<string>;
}
```
Built-in implementations \[#built-in-implementations]
Simple title adapter \[#simple-title-adapter]
Returns the first 50 characters of the first user message.
```tsx
import { createSimpleTitleAdapter } from "@assistant-ui/react-native";
const titleGenerator = createSimpleTitleAdapter();
```
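Its behavior is roughly equivalent to the following sketch (the fallback title and text-part handling here are assumptions, not the adapter's exact implementation):

```typescript
// Simplified message shape for illustration.
type TitleMsg = { role: string; content: { type: string; text?: string }[] };

// Take the first user message's first text part, truncated to 50 characters.
function simpleTitle(messages: TitleMsg[]): string {
  const firstUser = messages.find((m) => m.role === "user");
  const text = firstUser?.content.find((p) => p.type === "text")?.text;
  return (text ?? "New Thread").slice(0, 50);
}
```

This is cheap and deterministic; swap in the custom implementation below when you want model-generated titles.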
Custom implementation \[#custom-implementation]
```tsx
import type { TitleGenerationAdapter } from "@assistant-ui/react-native";
const aiTitleGenerator: TitleGenerationAdapter = {
  async generateTitle(messages) {
    const response = await fetch("/api/generate-title", {
      method: "POST",
      body: JSON.stringify({ messages }),
    });
    const { title } = await response.json();
    return title;
  },
};
```
# Custom Backend
URL: /docs/react-native/custom-backend
Connect your React Native app to your own backend API.
By default, `useLocalRuntime` manages threads and messages on-device. You can connect to your own backend in two ways depending on your needs.
Option 1: ChatModelAdapter only \[#option-1-chatmodeladapter-only]
The simplest approach — keep thread management local, but send messages to your backend for inference.
```tsx title="adapters/my-chat-adapter.ts"
import type { ChatModelAdapter } from "@assistant-ui/react-native";
export const myChatAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal }) {
    const response = await fetch("https://my-api.com/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal,
    });

    const reader = response.body?.getReader();
    if (!reader) throw new Error("No response body");

    const decoder = new TextDecoder();
    let fullText = "";

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;

      const chunk = decoder.decode(value, { stream: true });
      fullText += chunk;

      // Yield the accumulated text so the UI renders the full message so far
      yield { content: [{ type: "text", text: fullText }] };
    }
  },
};
```
```tsx title="hooks/use-app-runtime.ts"
import AsyncStorage from "@react-native-async-storage/async-storage";
import { useLocalRuntime } from "@assistant-ui/react-native";
import { myChatAdapter } from "@/adapters/my-chat-adapter";
export function useAppRuntime() {
  return useLocalRuntime(myChatAdapter, {
    storage: AsyncStorage, // threads + messages persisted locally
  });
}
```
This gives you:
* Streaming chat responses from your API
* Local thread list with persistence (AsyncStorage)
* Message history saved across app restarts
Option 2: Full backend thread management \[#option-2-full-backend-thread-management]
When you want your backend to own thread state (e.g. for cross-device sync, team sharing, or server-side history), implement a `RemoteThreadListAdapter`.
Implement the adapter \[#implement-the-adapter]
```tsx title="adapters/my-thread-list-adapter.ts"
import type { RemoteThreadListAdapter } from "@assistant-ui/react-native";
import { createAssistantStream } from "assistant-stream";
const API_BASE = "https://my-api.com";
export const myThreadListAdapter: RemoteThreadListAdapter = {
  async list() {
    const res = await fetch(`${API_BASE}/threads`);
    const threads = await res.json();
    return {
      threads: threads.map((t: any) => ({
        remoteId: t.id,
        status: t.archived ? "archived" : "regular",
        title: t.title,
      })),
    };
  },

  async initialize(localId) {
    const res = await fetch(`${API_BASE}/threads`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ localId }),
    });
    const { id } = await res.json();
    return { remoteId: id, externalId: undefined };
  },

  async rename(remoteId, title) {
    await fetch(`${API_BASE}/threads/${remoteId}`, {
      method: "PATCH",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ title }),
    });
  },

  async archive(remoteId) {
    await fetch(`${API_BASE}/threads/${remoteId}/archive`, {
      method: "POST",
    });
  },

  async unarchive(remoteId) {
    await fetch(`${API_BASE}/threads/${remoteId}/unarchive`, {
      method: "POST",
    });
  },

  async delete(remoteId) {
    await fetch(`${API_BASE}/threads/${remoteId}`, { method: "DELETE" });
  },

  async fetch(remoteId) {
    const res = await fetch(`${API_BASE}/threads/${remoteId}`);
    const t = await res.json();
    return {
      remoteId: t.id,
      status: t.archived ? "archived" : "regular",
      title: t.title,
    };
  },

  async generateTitle(remoteId, messages) {
    return createAssistantStream(async (controller) => {
      const res = await fetch(`${API_BASE}/threads/${remoteId}/title`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages }),
      });
      const { title } = await res.json();
      controller.appendText(title);
    });
  },
};
```
Compose the runtime \[#compose-the-runtime]
```tsx title="hooks/use-app-runtime.ts"
import {
  useLocalRuntime,
  useRemoteThreadListRuntime,
} from "@assistant-ui/react-native";
import { myChatAdapter } from "@/adapters/my-chat-adapter";
import { myThreadListAdapter } from "@/adapters/my-thread-list-adapter";

export function useAppRuntime() {
  return useRemoteThreadListRuntime({
    runtimeHook: () => useLocalRuntime(myChatAdapter),
    adapter: myThreadListAdapter,
  });
}
```
Use in your app \[#use-in-your-app]
```tsx title="app/index.tsx"
import { AssistantProvider } from "@assistant-ui/react-native";
import { useAppRuntime } from "@/hooks/use-app-runtime";
export default function App() {
  const runtime = useAppRuntime();

  return (
    <AssistantProvider runtime={runtime}>
      {/* your chat UI */}
    </AssistantProvider>
  );
}
```
Adapter methods \[#adapter-methods]
| Method | Description |
| ----------------------------------- | ---------------------------------------------------- |
| `list()` | Return all threads on mount |
| `initialize(localId)` | Create a thread server-side, return `{ remoteId }` |
| `rename(remoteId, title)` | Persist title changes |
| `archive(remoteId)` | Mark thread as archived |
| `unarchive(remoteId)` | Restore archived thread |
| `delete(remoteId)` | Permanently remove thread |
| `fetch(remoteId)` | Fetch single thread metadata |
| `generateTitle(remoteId, messages)` | Return an `AssistantStream` with the generated title |
Which option to choose? \[#which-option-to-choose]
| | Option 1: ChatModelAdapter | Option 2: RemoteThreadListAdapter |
| --------------------- | ------------------------------ | --------------------------------------------------- |
| **Thread storage** | On-device (AsyncStorage) | Your backend |
| **Message storage** | On-device (AsyncStorage) | On-device (can add history adapter for server-side) |
| **Cross-device sync** | No | Yes |
| **Setup complexity** | Minimal | Moderate |
| **Best for** | Single-device apps, prototypes | Production apps with user accounts |
# Hooks
URL: /docs/react-native/hooks
Reactive hooks for accessing runtime state in React Native.
All hooks support an optional **selector** function for fine-grained re-renders. Without a selector, the component re-renders on every state change. With a selector, it only re-renders when the selected value changes (shallow equality).
```tsx
// Re-renders on every thread state change
const thread = useThread();
// Re-renders only when isRunning changes
const isRunning = useThread((s) => s.isRunning);
```
State Hooks \[#state-hooks]
useThread \[#usethread]
Access thread state.
```tsx
import { useThread } from "@assistant-ui/react-native";
const messages = useThread((s) => s.messages);
const isRunning = useThread((s) => s.isRunning);
```
**ThreadState fields:**
| Field | Type | Description |
| -------------- | --------------------- | ------------------------------- |
| `messages` | `ThreadMessage[]` | All messages in the thread |
| `isRunning` | `boolean` | Whether the model is generating |
| `isDisabled` | `boolean` | Whether the thread is disabled |
| `capabilities` | `RuntimeCapabilities` | What actions are supported |
useComposer \[#usecomposer]
Access composer state.
```tsx
import { useComposer } from "@assistant-ui/react-native";
const text = useComposer((s) => s.text);
const isEmpty = useComposer((s) => s.isEmpty);
```
**ComposerState fields:**
| Field | Type | Description |
| ------------- | -------------- | ------------------------------ |
| `text` | `string` | Current input text |
| `isEmpty` | `boolean` | Whether the input is empty |
| `attachments` | `Attachment[]` | Current attachments |
| `canCancel` | `boolean` | Whether a run can be cancelled |
useMessage \[#usemessage]
Access the current message state. Must be used inside a `MessageProvider` or a `renderMessage` / `renderItem` callback.
```tsx
import { useMessage } from "@assistant-ui/react-native";
const role = useMessage((s) => s.role);
const isLast = useMessage((s) => s.isLast);
```
**MessageState fields:**
| Field | Type | Description |
| -------------- | --------------- | -------------------------------------- |
| `message` | `ThreadMessage` | The message object |
| `role` | `MessageRole` | `"user"`, `"assistant"`, or `"system"` |
| `isLast` | `boolean` | Whether this is the last message |
| `branchNumber` | `number` | Current branch index |
| `branchCount` | `number` | Total number of branches |
useContentPart \[#usecontentpart]
Access a specific content part by index within a message.
```tsx
import { useContentPart } from "@assistant-ui/react-native";
const part = useContentPart(0); // first content part
```
useThreadList \[#usethreadlist]
Access the thread list state.
```tsx
import { useThreadList } from "@assistant-ui/react-native";
const { threadIds, mainThreadId, threadItems } = useThreadList();
// With selector
const mainThreadId = useThreadList((s) => s.mainThreadId);
```
**ThreadListState fields:**
| Field | Type | Description |
| -------------- | ------------------------------------- | ----------------------------- |
| `threadIds` | `string[]` | All thread IDs |
| `mainThreadId` | `string` | Currently active thread ID |
| `threadItems` | `Record` | Thread metadata (title, etc.) |
Runtime Hooks \[#runtime-hooks]
useAssistantRuntime \[#useassistantruntime]
Get the `AssistantRuntime` from context. This is the top-level runtime with access to thread management.
```tsx
import { useAssistantRuntime } from "@assistant-ui/react-native";
const runtime = useAssistantRuntime();
// Switch threads
runtime.threads.switchToThread(threadId);
runtime.threads.switchToNewThread();
// Thread management
runtime.threads.rename(threadId, "New Title");
runtime.threads.delete(threadId);
// Access the current thread runtime
runtime.thread; // ThreadRuntime
```
useThreadRuntime \[#usethreadruntime]
Get the `ThreadRuntime` for the current thread.
```tsx
import { useThreadRuntime } from "@assistant-ui/react-native";
const threadRuntime = useThreadRuntime();
threadRuntime.cancelRun();
threadRuntime.appendMessage(message);
```
useComposerRuntime \[#usecomposerruntime]
Get the `ComposerRuntime` for the current composer.
```tsx
import { useComposerRuntime } from "@assistant-ui/react-native";
const composerRuntime = useComposerRuntime();
composerRuntime.setText("Hello");
composerRuntime.send();
```
useMessageRuntime \[#usemessageruntime]
Get the `MessageRuntime` for the current message.
```tsx
import { useMessageRuntime } from "@assistant-ui/react-native";
const messageRuntime = useMessageRuntime();
messageRuntime.reload();
messageRuntime.switchToBranch({ position: "next" });
```
useLocalRuntime \[#uselocalruntime]
Create an `AssistantRuntime` with a `ChatModelAdapter`. Optionally pass `storage` for persistence.
```tsx
import {
  useLocalRuntime,
  createSimpleTitleAdapter,
} from "@assistant-ui/react-native";
import AsyncStorage from "@react-native-async-storage/async-storage";

const runtime = useLocalRuntime(chatModel, {
  storage: AsyncStorage,
  titleGenerator: createSimpleTitleAdapter(),
});
```
| Option | Type | Description |
| ----------------- | ------------------------ | --------------------------------------------------- |
| `initialMessages` | `ThreadMessageLike[]` | Messages to pre-populate |
| `storage` | `AsyncStorageLike` | Thread and message persistence |
| `storagePrefix` | `string` | Key prefix for storage (default `"@assistant-ui:"`) |
| `titleGenerator` | `TitleGenerationAdapter` | Auto-generate thread titles |
Primitive Hooks \[#primitive-hooks]
Low-level hooks for building custom components.
useThreadMessages \[#usethreadmessages]
```tsx
import { useThreadMessages } from "@assistant-ui/react-native";
const messages = useThreadMessages(); // ThreadMessage[]
```
useThreadIsRunning \[#usethreadisrunning]
```tsx
import { useThreadIsRunning } from "@assistant-ui/react-native";
const isRunning = useThreadIsRunning(); // boolean
```
useThreadIsEmpty \[#usethreadisempty]
```tsx
import { useThreadIsEmpty } from "@assistant-ui/react-native";
const isEmpty = useThreadIsEmpty(); // boolean
```
useComposerSend \[#usecomposersend]
```tsx
import { useComposerSend } from "@assistant-ui/react-native";
const { send, canSend } = useComposerSend();
```
useComposerCancel \[#usecomposercancel]
```tsx
import { useComposerCancel } from "@assistant-ui/react-native";
const { cancel, canCancel } = useComposerCancel();
```
useComposerAddAttachment \[#usecomposeraddattachment]
Add an attachment to the composer. Returns `{ addAttachment }` — call it with a `File` or `CreateAttachment` object. Pair with your own file picker (e.g. `expo-image-picker`).
```tsx
import { useComposerAddAttachment } from "@assistant-ui/react-native";
const { addAttachment } = useComposerAddAttachment();
// With expo-image-picker
const pickImage = async () => {
  const result = await ImagePicker.launchImageLibraryAsync({
    base64: true,
    quality: 0.8,
  });
  if (result.canceled) return;

  for (const asset of result.assets) {
    await addAttachment({
      name: asset.fileName ?? "image.jpg",
      type: "image",
      content: [
        { type: "image", image: `data:image/jpeg;base64,${asset.base64}` },
      ],
    });
  }
};
```
You must configure an `AttachmentAdapter` (e.g. `SimpleImageAttachmentAdapter`) in your runtime options for attachments to work.
useMessageReload \[#usemessagereload]
```tsx
import { useMessageReload } from "@assistant-ui/react-native";
const { reload, canReload } = useMessageReload();
```
useMessageBranching \[#usemessagebranching]
```tsx
import { useMessageBranching } from "@assistant-ui/react-native";
const { branchNumber, branchCount, goToPrev, goToNext } =
useMessageBranching();
```
Model Context Hooks \[#model-context-hooks]
useAssistantTool \[#useassistanttool]
Register a tool with an optional UI renderer. The tool definition is forwarded to the model, and when the model calls it, the `execute` function runs and the `render` component displays the result.
```tsx
import { Text } from "react-native";
import { useAssistantTool } from "@assistant-ui/react-native";

useAssistantTool({
  toolName: "get_weather",
  description: "Get the current weather for a city",
  parameters: {
    type: "object",
    properties: {
      city: { type: "string" },
    },
    required: ["city"],
  },
  execute: async ({ city }) => {
    const res = await fetch(`https://api.weather.example/${city}`);
    return res.json();
  },
  render: ({ args, result }) => (
    <Text>
      {args.city}: {result?.temperature}°F
    </Text>
  ),
});
```
useAssistantToolUI \[#useassistanttoolui]
Register only a UI renderer for a tool (without tool definition or execute function).
```tsx
import { Text } from "react-native";
import { useAssistantToolUI } from "@assistant-ui/react-native";

useAssistantToolUI({
  toolName: "get_weather",
  render: ({ args, result, status }) =>
    status?.type === "running" ? (
      <Text>Loading weather for {args.city}...</Text>
    ) : (
      <Text>
        {args.city}: {result?.temperature}°F
      </Text>
    ),
});
```
useAssistantInstructions \[#useassistantinstructions]
Register system instructions in the model context.
```tsx
import { useAssistantInstructions } from "@assistant-ui/react-native";
useAssistantInstructions("You are a helpful weather assistant.");
```
makeAssistantTool \[#makeassistanttool]
Create a component that registers a tool when mounted. Useful for declarative tool registration.
```tsx
import { Text } from "react-native";
import { makeAssistantTool } from "@assistant-ui/react-native";

const WeatherTool = makeAssistantTool({
  toolName: "get_weather",
  description: "Get weather",
  parameters: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
  execute: async ({ city }) => ({ temperature: 72 }),
  render: ({ args, result }) => (
    <Text>
      {args.city}: {result?.temperature}°F
    </Text>
  ),
});

// Mount inside AssistantProvider to register
```
# Getting Started
URL: /docs/react-native
Build AI chat interfaces for iOS and Android with @assistant-ui/react-native.
Overview \[#overview]
`@assistant-ui/react-native` brings assistant-ui to React Native. It provides composable primitives, reactive hooks, and a local runtime — the same layered architecture as the web package, built on native components (`View`, `TextInput`, `FlatList`, `Pressable`).
**Key features:**
* **Primitives** — `Thread`, `Composer`, `Message`, `ThreadList`, `ThreadListItem`, `ChainOfThought`, `Suggestion`, `ActionBar`, `BranchPicker`, `Attachment` components that compose with standard React Native props
* **Reactive state** — `useAuiState` with selector support for fine-grained re-renders
* **Local runtime** — `useLocalRuntime` with pluggable `ChatModelAdapter` for any LLM API
* **Thread management** — Multi-thread support with create, switch, rename, delete
* **Tool system** — `useAssistantTool`, `makeAssistantTool` for registering tools with custom UI renderers
* **Attachments** — Image and file attachments with `useComposerAddAttachment` and attachment primitives
`@assistant-ui/react-native` shares its runtime core with `@assistant-ui/react` via `@assistant-ui/core`. The type system, state management, and runtime logic are identical — only the UI layer differs.
Getting Started \[#getting-started]
This guide uses [Expo](https://expo.dev) with the OpenAI API. You can substitute any LLM provider.
Create an Expo project \[#create-an-expo-project]
```sh
npx create-expo-app@latest my-chat-app
cd my-chat-app
```
Install dependencies \[#install-dependencies]
```sh
npx expo install @assistant-ui/react-native
```
Create a ChatModelAdapter \[#create-a-chatmodeladapter]
The adapter connects your LLM API to the runtime. Here's an example for the OpenAI chat completions API with streaming:
```tsx title="adapters/openai-chat-adapter.ts"
import type {
  ChatModelAdapter,
  ChatModelRunResult,
} from "@assistant-ui/react-native";

type OpenAIModelConfig = {
  apiKey: string;
  model?: string;
  baseURL?: string;
  fetch?: typeof globalThis.fetch;
};

export function createOpenAIChatModelAdapter(
  config: OpenAIModelConfig,
): ChatModelAdapter {
  const {
    apiKey,
    model = "gpt-4o-mini",
    baseURL = "https://api.openai.com/v1",
    fetch: customFetch = globalThis.fetch,
  } = config;

  return {
    async *run({ messages, abortSignal }) {
      const openAIMessages = messages
        .filter((m) => m.role !== "system")
        .map((m) => ({
          role: m.role as "user" | "assistant",
          content: m.content
            .filter((p) => p.type === "text")
            .map((p) => ("text" in p ? p.text : ""))
            .join("\n"),
        }));

      const response = await customFetch(`${baseURL}/chat/completions`, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${apiKey}`,
        },
        body: JSON.stringify({
          model,
          messages: openAIMessages,
          stream: true,
        }),
        signal: abortSignal,
      });

      if (!response.ok) {
        const body = await response.text().catch(() => "");
        throw new Error(`OpenAI API error: ${response.status} ${body}`);
      }

      const reader = response.body?.getReader();
      if (!reader) {
        const json = await response.json();
        const text = json.choices?.[0]?.message?.content ?? "";
        yield {
          content: [{ type: "text" as const, text }],
        } satisfies ChatModelRunResult;
        return;
      }

      const decoder = new TextDecoder();
      let fullText = "";
      try {
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          const chunk = decoder.decode(value, { stream: true });
          for (const line of chunk.split("\n")) {
            if (!line.startsWith("data: ")) continue;
            const data = line.slice(6);
            if (data === "[DONE]") continue;
            try {
              const parsed = JSON.parse(data);
              const content = parsed.choices?.[0]?.delta?.content ?? "";
              fullText += content;
              yield {
                content: [{ type: "text" as const, text: fullText }],
              } satisfies ChatModelRunResult;
            } catch {
              // skip invalid JSON
            }
          }
        }
      } finally {
        reader.releaseLock();
      }
    },
  };
}
```
On Expo, import `fetch` from `expo/fetch` for streaming support and pass it as the `fetch` option.
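The `data:`-line parsing inside the streaming loop is easy to get wrong, and it can be factored into a pure helper and unit-tested without any network. A sketch under the same response format as above — `extractDeltas` is a hypothetical name, not part of the package:

```typescript
// Hypothetical helper mirroring the parsing loop above: extract the delta
// strings from one decoded SSE chunk of an OpenAI streaming response.
export function extractDeltas(chunk: string): string[] {
  const deltas: string[] = [];
  for (const line of chunk.split("\n")) {
    if (!line.startsWith("data: ")) continue; // SSE data lines only
    const data = line.slice(6);
    if (data === "[DONE]") continue; // end-of-stream sentinel
    try {
      const parsed = JSON.parse(data);
      const content = parsed.choices?.[0]?.delta?.content;
      if (content) deltas.push(content);
    } catch {
      // a JSON object split across chunk boundaries fails to parse; skip it
    }
  }
  return deltas;
}
```

Note that the adapter yields the accumulated `fullText` on every delta rather than the delta itself, so the UI always receives the complete message so far.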
Set up the runtime \[#set-up-the-runtime]
```tsx title="hooks/use-app-runtime.ts"
import { useMemo } from "react";
import { fetch } from "expo/fetch";
import { useLocalRuntime } from "@assistant-ui/react-native";
import { createOpenAIChatModelAdapter } from "@/adapters/openai-chat-adapter";

export function useAppRuntime() {
  const chatModel = useMemo(
    () =>
      createOpenAIChatModelAdapter({
        apiKey: process.env.EXPO_PUBLIC_OPENAI_API_KEY ?? "",
        model: "gpt-4o-mini",
        fetch,
      }),
    [],
  );
  return useLocalRuntime(chatModel);
}
```
Build the UI \[#build-the-ui]
Wrap your app with `AssistantProvider` — thread and composer contexts are set up automatically:
```tsx title="app/index.tsx"
import {
  AssistantProvider,
  useAuiState,
  useAui,
} from "@assistant-ui/react-native";
import type { ThreadMessage } from "@assistant-ui/react-native";
import {
  View,
  Text,
  TextInput,
  FlatList,
  Pressable,
  KeyboardAvoidingView,
  Platform,
} from "react-native";
import { useAppRuntime } from "@/hooks/use-app-runtime";

function MessageBubble({ message }: { message: ThreadMessage }) {
  const isUser = message.role === "user";
  const text = message.content
    .filter((p) => p.type === "text")
    .map((p) => ("text" in p ? p.text : ""))
    .join("\n");
  return (
    <View style={{ alignSelf: isUser ? "flex-end" : "flex-start", padding: 8 }}>
      <Text>{text}</Text>
    </View>
  );
}

function Composer() {
  const aui = useAui();
  const text = useAuiState((s) => s.composer.text);
  const isEmpty = useAuiState((s) => s.composer.isEmpty);
  return (
    <View style={{ flexDirection: "row", alignItems: "flex-end", padding: 8 }}>
      <TextInput
        value={text}
        onChangeText={(t) => aui.composer().setText(t)}
        placeholder="Message..."
        multiline
        style={{
          flex: 1,
          borderWidth: 1,
          borderColor: "#ddd",
          borderRadius: 20,
          paddingHorizontal: 16,
          paddingVertical: 10,
          maxHeight: 120,
        }}
      />
      <Pressable
        onPress={() => aui.composer().send()}
        disabled={isEmpty}
        style={{
          marginLeft: 8,
          backgroundColor: !isEmpty ? "#007aff" : "#ccc",
          borderRadius: 20,
          width: 36,
          height: 36,
          justifyContent: "center",
          alignItems: "center",
        }}
      >
        <Text style={{ color: "#fff" }}>↑</Text>
      </Pressable>
    </View>
  );
}

function ChatScreen() {
  const messages = useAuiState((s) => s.thread.messages) as ThreadMessage[];
  return (
    <KeyboardAvoidingView
      style={{ flex: 1 }}
      behavior={Platform.OS === "ios" ? "padding" : undefined}
    >
      <FlatList
        data={messages}
        keyExtractor={(m) => m.id}
        renderItem={({ item }) => <MessageBubble message={item} />}
      />
      <Composer />
    </KeyboardAvoidingView>
  );
}

export default function App() {
  const runtime = useAppRuntime();
  return (
    <AssistantProvider runtime={runtime}>
      <ChatScreen />
    </AssistantProvider>
  );
}
```
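The text-extraction pattern appears in both the adapter and `MessageBubble`; factoring it into one helper avoids drift between the two. A minimal sketch with simplified local types — `getMessageText` is an illustration, not a package export:

```typescript
// Simplified content-part types (assumption: the real ThreadMessage content
// union is wider, including tool calls, reasoning, files, etc.).
type TextPart = { type: "text"; text: string };
type ContentPart = TextPart | { type: "image" } | { type: "tool-call" };

// Hypothetical helper: join all text parts of a message's content with newlines.
export function getMessageText(content: readonly ContentPart[]): string {
  return content
    .filter((p): p is TextPart => p.type === "text")
    .map((p) => p.text)
    .join("\n");
}
```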
Architecture \[#architecture]
```
useLocalRuntime(chatModel, options?)
  └─ AssistantProvider
       ├─ useAuiState((s) => s.thread)   → thread state (messages, isRunning, …)
       ├─ useAuiState((s) => s.composer) → composer state (text, isEmpty, …)
       ├─ useAuiState((s) => s.message)  → single message state (inside renderItem)
       └─ Primitives                     → ThreadRoot, ComposerInput, MessageContent, …
```
The runtime core is shared with `@assistant-ui/react` — only the UI primitives are React Native-specific.
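`useAuiState`'s selector support means a component re-renders only when its selected slice changes. The mechanism can be sketched as a tiny selector store — an illustration of the pattern, not the package's actual implementation:

```typescript
type Listener = () => void;

// Minimal selector-store sketch: subscribers are notified only when their
// selected slice changes (compared with Object.is), which is how selector
// hooks avoid unnecessary re-renders.
function createStore<S>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    setState(next: S) {
      state = next;
      listeners.forEach((l) => l());
    },
    subscribeWithSelector<T>(selector: (s: S) => T, onChange: (v: T) => void) {
      let prev = selector(state);
      const listener = () => {
        const next = selector(state);
        if (!Object.is(prev, next)) {
          prev = next;
          onChange(next);
        }
      };
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe
    },
  };
}
```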
Example \[#example]
For a complete Expo example with drawer navigation, thread list, and styled chat UI, see the [`with-expo` example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-expo).
# Primitives
URL: /docs/react-native/primitives
Composable React Native components for building chat UIs.
Primitives are thin wrappers around React Native components (`View`, `TextInput`, `FlatList`, `Pressable`) that integrate with the assistant-ui runtime. They accept all standard React Native props and add runtime-aware behavior.
Many primitives share their core logic with `@assistant-ui/react` via `@assistant-ui/core/react` — only the UI layer (View/Pressable/Text vs DOM elements) differs.
Thread \[#thread]
ThreadRoot \[#threadroot]
Container `View` for the thread area.
```tsx
import { ThreadRoot } from "@assistant-ui/react-native";
<ThreadRoot>{children}</ThreadRoot>
```
| Prop | Type | Description |
| --------- | ----------- | -------------------------------- |
| `...rest` | `ViewProps` | Standard React Native View props |
ThreadMessages \[#threadmessages]
`FlatList`-based message list with automatic runtime integration. Each message is wrapped in a scoped context so that `useAuiState((s) => s.message)` works inside `renderMessage`.
```tsx
import { ThreadMessages } from "@assistant-ui/react-native";
<ThreadMessages
  renderMessage={({ message }) => <MessageBubble message={message} />}
/>
```
| Prop | Type | Description |
| --------------- | ------------------------------------------------------------------- | ----------------------------------------------------- |
| `renderMessage` | `(info: { message: ThreadMessage; index: number }) => ReactElement` | Message renderer |
| `...rest` | `FlatListProps` | Standard FlatList props (except `data`, `renderItem`) |
ThreadEmpty \[#threadempty]
Renders children only when the thread has no messages.
```tsx
import { ThreadEmpty } from "@assistant-ui/react-native";
<ThreadEmpty>
  <Text>Send a message to get started</Text>
</ThreadEmpty>
```
ThreadIf \[#threadif]
Conditional rendering based on thread state.
```tsx
import { ThreadIf } from "@assistant-ui/react-native";
<ThreadIf empty>
  <Text>No messages yet</Text>
</ThreadIf>
```
| Prop | Type | Description |
| --------- | --------- | ----------------------------- |
| `empty` | `boolean` | Render when thread is empty |
| `running` | `boolean` | Render when thread is running |
Composer \[#composer]
ComposerRoot \[#composerroot]
Container `View` for the composer area.
```tsx
import { ComposerRoot } from "@assistant-ui/react-native";
<ComposerRoot>{children}</ComposerRoot>
```
ComposerInput \[#composerinput]
`TextInput` wired to the composer runtime. Value and `onChangeText` are managed automatically.
```tsx
import { ComposerInput } from "@assistant-ui/react-native";

<ComposerInput placeholder="Message..." />
```
| Prop | Type | Description |
| --------- | ---------------- | --------------------------------------------------------- |
| `...rest` | `TextInputProps` | Standard TextInput props (except `value`, `onChangeText`) |
ComposerSend \[#composersend]
`Pressable` that sends the current message. Automatically disabled when the composer is empty.
```tsx
import { ComposerSend } from "@assistant-ui/react-native";
<ComposerSend>
  <Text>Send</Text>
</ComposerSend>
```
ComposerCancel \[#composercancel]
`Pressable` that cancels the current run. Automatically disabled when no run is active.
```tsx
import { ComposerCancel } from "@assistant-ui/react-native";
<ComposerCancel>
  <Text>Stop</Text>
</ComposerCancel>
```
ComposerAttachments \[#composerattachments]
Renders composer attachments using the provided component configuration.
```tsx
import { ComposerAttachments } from "@assistant-ui/react-native";

// MyImageAttachment is your own attachment renderer
<ComposerAttachments components={{ Image: MyImageAttachment }} />
```
| Prop | Type | Description |
| ------------ | ------------------------------------------- | -------------------------------------- |
| `components` | `{ Image?, Document?, File?, Attachment? }` | Component renderers by attachment type |
Message \[#message]
MessageRoot \[#messageroot]
Container `View` for a single message.
```tsx
import { MessageRoot } from "@assistant-ui/react-native";
<MessageRoot>{children}</MessageRoot>
```
MessageContent \[#messagecontent]
Renders message content parts. Tool call and data parts automatically render registered tool UIs (via `useAssistantTool` / `useAssistantDataUI`), falling back to render props if provided.
```tsx
import { MessageContent } from "@assistant-ui/react-native";
<MessageContent
  renderText={({ part }) => <Text>{part.text}</Text>}
  renderImage={({ part }) => <Image source={{ uri: part.image }} />}
/>
```
| Prop | Type | Description |
| ----------------- | ------------------------------------------ | ------------------------------------------------------- |
| `renderText` | `(props: { part; index }) => ReactElement` | Text part renderer |
| `renderToolCall` | `(props: { part; index }) => ReactElement` | Tool call fallback (used when no tool UI is registered) |
| `renderImage` | `(props: { part; index }) => ReactElement` | Image part renderer |
| `renderReasoning` | `(props: { part; index }) => ReactElement` | Reasoning part renderer |
| `renderSource` | `(props: { part; index }) => ReactElement` | Source part renderer |
| `renderFile` | `(props: { part; index }) => ReactElement` | File part renderer |
| `renderData` | `(props: { part; index }) => ReactElement` | Data part fallback (used when no data UI is registered) |
MessageIf \[#messageif]
Conditional rendering based on message properties.
```tsx
import { MessageIf } from "@assistant-ui/react-native";
<MessageIf user>
  <Text>You said:</Text>
</MessageIf>
```
| Prop | Type | Description |
| ----------- | --------- | -------------------------------------- |
| `user` | `boolean` | Render for user messages |
| `assistant` | `boolean` | Render for assistant messages |
| `running` | `boolean` | Render when message is being generated |
| `last` | `boolean` | Render for the last message |
MessageAttachments \[#messageattachments]
Renders user message attachments using the provided component configuration.
```tsx
import { MessageAttachments } from "@assistant-ui/react-native";

// MyImageAttachment is your own attachment renderer
<MessageAttachments components={{ Image: MyImageAttachment }} />
```
| Prop | Type | Description |
| ------------ | ------------------------------------------- | -------------------------------------- |
| `components` | `{ Image?, Document?, File?, Attachment? }` | Component renderers by attachment type |
Attachment \[#attachment]
Primitives for rendering individual attachments (inside `ComposerAttachments` or `MessageAttachments`).
AttachmentRoot \[#attachmentroot]
Container `View` for an attachment.
```tsx
import { AttachmentRoot } from "@assistant-ui/react-native";
<AttachmentRoot>{children}</AttachmentRoot>
```
AttachmentName \[#attachmentname]
`Text` component displaying the attachment filename.
```tsx
import { AttachmentName } from "@assistant-ui/react-native";

<AttachmentName />
```
AttachmentThumb \[#attachmentthumb]
`Text` component displaying the file extension (e.g. `.pdf`).
```tsx
import { AttachmentThumb } from "@assistant-ui/react-native";

<AttachmentThumb />
```
AttachmentRemove \[#attachmentremove]
`Pressable` that removes the attachment from the composer.
```tsx
import { AttachmentRemove } from "@assistant-ui/react-native";
<AttachmentRemove>
  <Text>Remove</Text>
</AttachmentRemove>
```
ActionBar \[#actionbar]
ActionBarCopy \[#actionbarcopy]
`Pressable` that copies the message content. Supports function-as-children for copy state feedback.
```tsx
import { ActionBarCopy } from "@assistant-ui/react-native";
<ActionBarCopy>
  {({ isCopied }) => <Text>{isCopied ? "Copied!" : "Copy"}</Text>}
</ActionBarCopy>
```
| Prop | Type | Description |
| ----------------- | ------------------------ | ----------------------------------------------------- |
| `copiedDuration` | `number` | Duration in ms to show "copied" state (default: 3000) |
| `copyToClipboard` | `(text: string) => void` | Custom clipboard function |
ActionBarEdit \[#actionbaredit]
`Pressable` that enters edit mode for a message.
```tsx
import { ActionBarEdit } from "@assistant-ui/react-native";
<ActionBarEdit>
  <Text>Edit</Text>
</ActionBarEdit>
```
ActionBarReload \[#actionbarreload]
`Pressable` that regenerates an assistant message.
```tsx
import { ActionBarReload } from "@assistant-ui/react-native";
<ActionBarReload>
  <Text>Retry</Text>
</ActionBarReload>
```
ActionBarFeedbackPositive / ActionBarFeedbackNegative \[#actionbarfeedbackpositive--actionbarfeedbacknegative]
`Pressable` buttons for submitting message feedback.
```tsx
import {
ActionBarFeedbackPositive,
ActionBarFeedbackNegative,
} from "@assistant-ui/react-native";
<ActionBarFeedbackPositive>
  {({ isSubmitted }) => <Text>{isSubmitted ? "👍" : "👍🏻"}</Text>}
</ActionBarFeedbackPositive>
<ActionBarFeedbackNegative>
  {({ isSubmitted }) => <Text>{isSubmitted ? "👎" : "👎🏻"}</Text>}
</ActionBarFeedbackNegative>
```
BranchPicker \[#branchpicker]
BranchPickerPrevious / BranchPickerNext \[#branchpickerprevious--branchpickernext]
`Pressable` buttons to navigate between message branches.
```tsx
import {
BranchPickerPrevious,
BranchPickerNext,
BranchPickerNumber,
BranchPickerCount,
} from "@assistant-ui/react-native";
<BranchPickerPrevious>
  <Text>←</Text>
</BranchPickerPrevious>
<Text>
  <BranchPickerNumber /> / <BranchPickerCount />
</Text>
<BranchPickerNext>
  <Text>→</Text>
</BranchPickerNext>
```
BranchPickerNumber / BranchPickerCount \[#branchpickernumber--branchpickercount]
`Text` components displaying the current branch number and total count.
ThreadList \[#threadlist]
ThreadListRoot \[#threadlistroot]
Container `View` for the thread list.
```tsx
import { ThreadListRoot } from "@assistant-ui/react-native";
<ThreadListRoot>{children}</ThreadListRoot>
```
ThreadListItems \[#threadlistitems]
`FlatList` of thread IDs with runtime integration.
```tsx
import { ThreadListItems } from "@assistant-ui/react-native";
<ThreadListItems
  renderItem={({ threadId }) => (
    // MyThreadItem is your own row component
    <MyThreadItem threadId={threadId} />
  )}
/>
```
| Prop | Type | Description |
| ------------ | -------------------------------------------------------------- | ----------------------------------------------------- |
| `renderItem` | `(props: { threadId: string; index: number }) => ReactElement` | Thread item renderer |
| `...rest` | `FlatListProps` | Standard FlatList props (except `data`, `renderItem`) |
ThreadListNew \[#threadlistnew]
`Pressable` that creates a new thread.
```tsx
import { ThreadListNew } from "@assistant-ui/react-native";
<ThreadListNew>
  <Text>New Chat</Text>
</ThreadListNew>
```
ThreadListItem \[#threadlistitem]
Primitives for rendering individual thread list items. Use inside a `ThreadListItemByIndexProvider` or within `ThreadListPrimitiveItems` components.
ThreadListItemRoot \[#threadlistitemroot]
Container `View` for a thread list item.
```tsx
import { ThreadListItemRoot } from "@assistant-ui/react-native";
<ThreadListItemRoot>{children}</ThreadListItemRoot>
```
ThreadListItemTitle \[#threadlistitemtitle]
Renders the thread title text. Falls back to the provided fallback when title is empty. This component is shared from `@assistant-ui/core/react`.
```tsx
import { ThreadListItemTitle } from "@assistant-ui/react-native";

<ThreadListItemTitle fallback="New Chat" />
```
| Prop | Type | Description |
| ---------- | ----------- | ----------------------------------- |
| `fallback` | `ReactNode` | Content to show when title is empty |
ThreadListItemTrigger \[#threadlistitemtrigger]
`Pressable` that switches to the thread.
```tsx
import { ThreadListItemTrigger } from "@assistant-ui/react-native";

<ThreadListItemTrigger>
  <ThreadListItemTitle fallback="New Chat" />
</ThreadListItemTrigger>
```
ThreadListItemDelete \[#threadlistitemdelete]
`Pressable` that deletes the thread.
```tsx
import { ThreadListItemDelete } from "@assistant-ui/react-native";
<ThreadListItemDelete>
  <Text>Delete</Text>
</ThreadListItemDelete>
```
ThreadListItemArchive / ThreadListItemUnarchive \[#threadlistitemarchive--threadlistitemunarchive]
`Pressable` buttons that archive or unarchive the thread.
```tsx
import {
ThreadListItemArchive,
ThreadListItemUnarchive,
} from "@assistant-ui/react-native";
<ThreadListItemArchive>
  <Text>Archive</Text>
</ThreadListItemArchive>
<ThreadListItemUnarchive>
  <Text>Unarchive</Text>
</ThreadListItemUnarchive>
```
Suggestion \[#suggestion]
Primitives for rendering suggestions. Use inside a `SuggestionByIndexProvider` (from `@assistant-ui/core/react`).
SuggestionTitle \[#suggestiontitle]
`Text` component displaying the suggestion title.
```tsx
import { SuggestionTitle } from "@assistant-ui/react-native";

<SuggestionTitle />
```
SuggestionDescription \[#suggestiondescription]
`Text` component displaying the suggestion description/label.
```tsx
import { SuggestionDescription } from "@assistant-ui/react-native";

<SuggestionDescription />
```
SuggestionTrigger \[#suggestiontrigger]
`Pressable` that triggers the suggestion action (send or insert into composer).
```tsx
import { SuggestionTrigger } from "@assistant-ui/react-native";

<SuggestionTrigger send>
  <SuggestionTitle />
</SuggestionTrigger>
```
| Prop | Type | Description |
| --------------- | --------- | --------------------------------------------------------------- |
| `send` | `boolean` | When true, sends immediately; when false, inserts into composer |
| `clearComposer` | `boolean` | Whether to clear/replace composer text (default: true) |
ThreadSuggestion \[#threadsuggestion]
Inline `Pressable` that triggers a suggestion with a specified prompt. Use this when you want to hardcode suggestion text.
```tsx
import { ThreadSuggestion } from "@assistant-ui/react-native";
<ThreadSuggestion prompt="Tell me a joke" send>
  <Text>Tell me a joke</Text>
</ThreadSuggestion>
```
| Prop | Type | Description |
| --------------- | --------- | ------------------------------------------------------ |
| `prompt` | `string` | The suggestion text |
| `send` | `boolean` | When true, sends immediately |
| `clearComposer` | `boolean` | Whether to clear/replace composer text (default: true) |
ThreadPrimitiveSuggestions \[#threadprimitivesuggestions]
Renders all suggestions from the store. Shared from `@assistant-ui/core/react`.
```tsx
import { ThreadPrimitiveSuggestions } from "@assistant-ui/react-native";

<ThreadPrimitiveSuggestions />
```
ChainOfThought \[#chainofthought]
Primitives for rendering chain of thought content (grouped reasoning and tool-call parts).
ChainOfThoughtRoot \[#chainofthoughtroot]
Container `View` for chain of thought content.
```tsx
import { ChainOfThoughtRoot } from "@assistant-ui/react-native";
<ChainOfThoughtRoot>{children}</ChainOfThoughtRoot>
```
ChainOfThoughtAccordionTrigger \[#chainofthoughtaccordiontrigger]
`Pressable` that toggles the collapsed state of the chain of thought.
```tsx
import { ChainOfThoughtAccordionTrigger } from "@assistant-ui/react-native";
<ChainOfThoughtAccordionTrigger>
  <Text>Toggle reasoning</Text>
</ChainOfThoughtAccordionTrigger>
```
ChainOfThoughtPrimitiveParts \[#chainofthoughtprimitiveparts]
Renders the parts within a chain of thought. Shared from `@assistant-ui/core/react`.
```tsx
import { ChainOfThoughtPrimitiveParts } from "@assistant-ui/react-native";
<ChainOfThoughtPrimitiveParts
  components={{
    Text: ({ text }) => <Text>{text}</Text>,
    tools: { Fallback: MyToolComponent },
  }}
/>
```
Additional Composer Primitives \[#additional-composer-primitives]
ComposerAddAttachment \[#composeraddattachment]
`Pressable` for triggering attachment addition. The actual file picker must be implemented by the consumer (e.g. using `expo-document-picker` or `expo-image-picker`).
```tsx
import { ComposerAddAttachment } from "@assistant-ui/react-native";
<ComposerAddAttachment>
  <Text>📎</Text>
</ComposerAddAttachment>
```
ComposerIf \[#composerif]
Conditional rendering based on composer state. Shared from `@assistant-ui/core/react`.
```tsx
import { ComposerIf } from "@assistant-ui/react-native";
<ComposerIf editing>
  <Text>Currently editing</Text>
</ComposerIf>
```
| Prop | Type | Description |
| ----------- | --------- | --------------------------------------- |
| `editing` | `boolean` | Render when composer is in editing mode |
| `dictation` | `boolean` | Render when dictation is active |
Shared Primitives from core/react \[#shared-primitives-from-corereact]
The following primitives are shared with `@assistant-ui/react` via `@assistant-ui/core/react` and work identically on both platforms:
* **`ThreadPrimitiveMessages`** / **`ThreadPrimitiveMessageByIndex`** — Component-composition pattern for rendering messages (alternative to the FlatList-based `ThreadMessages`)
* **`MessagePrimitiveParts`** / **`MessagePrimitivePartByIndex`** — Renders message parts with grouping, ChainOfThought, and Empty support (alternative to the render-props-based `MessageContent`). Uses `Text` as the default text renderer on RN.
* **`ThreadPrimitiveSuggestions`** / **`ThreadPrimitiveSuggestionByIndex`** — Renders suggestions from the store
* **`ThreadListPrimitiveItems`** / **`ThreadListPrimitiveItemByIndex`** — Component-composition pattern for thread list items (alternative to the FlatList-based `ThreadListItems`)
* **`ChainOfThoughtPrimitiveParts`** — Renders chain of thought parts
```tsx
import {
ThreadPrimitiveMessages,
MessagePrimitiveParts,
ThreadPrimitiveSuggestions,
} from "@assistant-ui/react-native";
```
# Accordion
URL: /docs/ui/accordion
A vertically stacked set of interactive headings that reveal or hide content sections.
import { PreviewCode } from "@/components/docs/preview-code.server";
import {
AccordionSample,
AccordionVariantsSample,
AccordionMultipleSample,
AccordionWithIconsSample,
AccordionControlledSample,
AccordionFAQSample,
} from "@/components/docs/samples/accordion";
This is a **standalone component** that does not depend on the assistant-ui runtime. Use it anywhere in your application.
Installation \[#installation]
Usage \[#usage]
```tsx
import {
Accordion,
AccordionItem,
AccordionTrigger,
AccordionContent,
} from "@/components/assistant-ui/accordion";
export function Example() {
  return (
    <Accordion>
      <AccordionItem value="item-1">
        <AccordionTrigger>Section 1</AccordionTrigger>
        <AccordionContent>Content for section 1.</AccordionContent>
      </AccordionItem>
      <AccordionItem value="item-2">
        <AccordionTrigger>Section 2</AccordionTrigger>
        <AccordionContent>Content for section 2.</AccordionContent>
      </AccordionItem>
    </Accordion>
  );
}
```
Examples \[#examples]
Variants \[#variants]
Use the `variant` prop on `Accordion` to change the visual style. Child components inherit the variant automatically.
```tsx
// Default - border-bottom separator
<Accordion variant="default">...</Accordion>

// Outline - bordered container
<Accordion variant="outline">...</Accordion>

// Ghost - separated cards
<Accordion variant="ghost">...</Accordion>
```
Multiple Items Open \[#multiple-items-open]
Use `type="multiple"` to allow multiple items to be open simultaneously.
```tsx
<Accordion type="multiple">
  <AccordionItem value="item-1">
    <AccordionTrigger>First Section</AccordionTrigger>
    <AccordionContent>Content 1</AccordionContent>
  </AccordionItem>
  <AccordionItem value="item-2">
    <AccordionTrigger>Second Section</AccordionTrigger>
    <AccordionContent>Content 2</AccordionContent>
  </AccordionItem>
</Accordion>
```
With Icons \[#with-icons]
Add icons or custom elements inside the trigger.
Controlled \[#controlled]
Use `value` and `onValueChange` for controlled accordion state.
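Controlled state boils down to toggling the open value(s) yourself in `onValueChange`. The toggle logic for both modes can be sketched as a pure function — illustrative only, not part of the component:

```typescript
// Toggle logic for an accordion: "single" keeps at most one item open,
// "multiple" keeps a set of open items.
export function toggleAccordion(
  open: string[],
  item: string,
  type: "single" | "multiple",
): string[] {
  const isOpen = open.includes(item);
  if (type === "single") return isOpen ? [] : [item];
  return isOpen ? open.filter((v) => v !== item) : [...open, item];
}
```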
FAQ Section \[#faq-section]
A practical example of using accordion for a FAQ section.
API Reference \[#api-reference]
Composable API \[#composable-api]
| Component | Description |
| ------------------ | ------------------------------------------------------------ |
| `Accordion` | The root component that manages accordion state and variant. |
| `AccordionItem` | A single collapsible section container. |
| `AccordionTrigger` | The clickable header that toggles content visibility. |
| `AccordionContent` | The collapsible content panel. |
Accordion \[#accordion]
The root component that manages accordion state. Set `variant` here to style all child components.
| Prop | Type | Default | Description |
| --------------- | ----------------------------------- | ----------- | -------------------------------------------------------------------------------- |
| `onValueChange` | `(value) => void`                   | —           | Callback when the open item(s) change.                                           |
| `variant`       | `"default" \| "outline" \| "ghost"` | `"default"` | The visual style of the accordion. Child components inherit this automatically.  |
| `className`     | `string`                            | —           | Additional CSS classes.                                                           |
AccordionItem \[#accordionitem]
A single collapsible section container.
AccordionTrigger \[#accordiontrigger]
The clickable header that toggles content visibility.
AccordionContent \[#accordioncontent]
The collapsible content panel.
Style Variants (CVA) \[#style-variants-cva]
| Export | Description |
| ------------------- | ----------------------------------- |
| `accordionVariants` | Styles for the accordion container. |
```tsx
import { accordionVariants } from "@/components/assistant-ui/accordion";
<div className={accordionVariants({ variant: "outline" })}>
  Custom Accordion Container
</div>
```
# AssistantModal
URL: /docs/ui/assistant-modal
Floating chat bubble for support widgets and help desks.
import { AssistantModalSample } from "@/components/docs/samples/assistant-modal";
A floating chat modal built on Radix UI Popover. Ideal for support widgets, help desks, and embedded assistants.
Getting Started \[#getting-started]
Add assistant-modal \[#add-assistant-modal]
This adds `/components/assistant-ui/assistant-modal.tsx` to your project, which you can adjust as needed.
Use in your application \[#use-in-your-application]
```tsx title="/app/page.tsx" {1,6}
import { AssistantModal } from "@/components/assistant-ui/assistant-modal";

export default function Home() {
  return (
    <main>
      <AssistantModal />
    </main>
  );
}
```
Anatomy \[#anatomy]
The `AssistantModal` component is built with the following primitives:
```tsx
import { AssistantModalPrimitive } from "@assistant-ui/react";
<AssistantModalPrimitive.Root>
  <AssistantModalPrimitive.Trigger />
  <AssistantModalPrimitive.Content>
    {/* Thread component goes here */}
  </AssistantModalPrimitive.Content>
</AssistantModalPrimitive.Root>
```
API Reference \[#api-reference]
Root \[#root]
Contains all parts of the modal. Based on Radix UI Popover.
| Prop | Type | Description |
| ------------------------- | ------------------------- | ---------------------------------------------------------------- |
| `onOpenChange`            | `(open: boolean) => void` | Callback when the open state changes.                             |
| `unstable_openOnRunStart` | `boolean`                 | Automatically open the modal when the assistant starts running.   |
Trigger \[#trigger]
A button that toggles the modal open/closed state.
This primitive renders a `