# LocalRuntime
URL: /docs/runtimes/custom/local
Quickest path to a working chat. Handles state while you handle the API.
Overview \[#overview]
`LocalRuntime` is the simplest way to connect your own custom backend to assistant-ui. It manages all chat state internally while providing a clean adapter interface to connect with any REST API, OpenAI, or custom language model.
`LocalRuntime` provides:
* **Built-in state management** for messages, threads, and conversation history
* **Automatic features** like message editing, reloading, and branch switching
* **Multi-thread support** through [Assistant Cloud](/docs/cloud) or your own database using `useRemoteThreadListRuntime`
* **Simple adapter pattern** to connect any backend API
While `LocalRuntime` manages state in memory by default, it offers multiple persistence options through adapters: use the history adapter for single-thread persistence, Assistant Cloud for managed multi-thread support, or implement your own storage with `useRemoteThreadListRuntime`.
When to Use \[#when-to-use]
Use `LocalRuntime` if you need:
* **Quick setup with minimal configuration** - Get a fully functional chat interface with just a few lines of code
* **Built-in state management** - No need to manage messages, threads, or conversation history yourself
* **Automatic features** - Branch switching, message editing, and regeneration work out of the box
* **API flexibility** - Connect to any REST endpoint, OpenAI, or custom model with a simple adapter
* **Multi-thread support** - Full thread management with Assistant Cloud or custom database
* **Thread persistence** - Via history adapter, Assistant Cloud, or custom thread list adapter
Getting Started \[#getting-started]
Create a Next.js project \[#create-a-nextjs-project]
```sh
npx create-next-app@latest my-app
cd my-app
```
Install `@assistant-ui/react` \[#install-assistant-uireact]
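Install the package:

```sh npm2yarn
npm install @assistant-ui/react
```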
Add `assistant-ui` Thread component \[#add-assistant-ui-thread-component]
```sh npm2yarn
npx assistant-ui@latest add thread
```
Define a `MyRuntimeProvider` component \[#define-a-myruntimeprovider-component]
Update the `MyModelAdapter` below to integrate with your own custom API.
See `LocalRuntimeOptions` [API Reference](#localruntimeoptions) for available configuration options.
```tsx twoslash include MyRuntimeProvider title="app/MyRuntimeProvider.tsx"
// @filename: /app/MyRuntimeProvider.tsx
// ---cut---
"use client";

import type { ReactNode } from "react";
import {
  AssistantRuntimeProvider,
  useLocalRuntime,
  type ChatModelAdapter,
} from "@assistant-ui/react";

const MyModelAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal }) {
    // TODO replace with your own API
    const result = await fetch("", {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      // forward the messages in the chat to the API
      body: JSON.stringify({
        messages,
      }),
      // if the user hits the "cancel" button or escape keyboard key, cancel the request
      signal: abortSignal,
    });

    const data = await result.json();
    return {
      content: [
        {
          type: "text",
          text: data.text,
        },
      ],
    };
  },
};

export function MyRuntimeProvider({
  children,
}: Readonly<{
  children: ReactNode;
}>) {
  const runtime = useLocalRuntime(MyModelAdapter);

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
Wrap your app in `MyRuntimeProvider` \[#wrap-your-app-in-myruntimeprovider]
```tsx {1,11,17} twoslash title="app/layout.tsx"
// @include: MyRuntimeProvider
// @filename: /app/layout.tsx
// ---cut---
import type { ReactNode } from "react";
import { MyRuntimeProvider } from "@/app/MyRuntimeProvider";

export default function RootLayout({
  children,
}: Readonly<{
  children: ReactNode;
}>) {
  return (
    <MyRuntimeProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </MyRuntimeProvider>
  );
}
```
Use the Thread component \[#use-the-thread-component]
```tsx title="app/page.tsx"
import { Thread } from "@/components/assistant-ui/thread";

export default function Page() {
  return <Thread />;
}
```
Streaming Responses \[#streaming-responses]
Implement streaming by declaring `run` as an async generator function (`async *run`).
```tsx twoslash {2, 11-13} title="app/MyRuntimeProvider.tsx"
import {
  ChatModelAdapter,
  ThreadMessage,
  type ModelContext,
} from "@assistant-ui/react";
import { OpenAI } from "openai";

const openai = new OpenAI();

const backendApi = async ({
  messages,
  abortSignal,
  context,
}: {
  messages: readonly ThreadMessage[];
  abortSignal: AbortSignal;
  context: ModelContext;
}) => {
  return openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say this is a test" }],
    stream: true,
  });
};
// ---cut---
const MyModelAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal, context }) {
    const stream = await backendApi({ messages, abortSignal, context });

    let text = "";
    for await (const part of stream) {
      text += part.choices[0]?.delta?.content || "";

      yield {
        content: [{ type: "text", text }],
      };
    }
  },
};
```
Streaming with Tool Calls \[#streaming-with-tool-calls]
Handle streaming responses that include function calls:
```tsx
const MyModelAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal, context }) {
    const stream = await openai.chat.completions.create(
      {
        model: "gpt-4o",
        messages: convertToOpenAIMessages(messages),
        tools: context.tools,
        stream: true,
      },
      // the abort signal is a request option, not part of the body
      { signal: abortSignal },
    );

    let content = "";
    const toolCalls: any[] = [];

    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta;

      // Handle text content
      if (delta?.content) {
        content += delta.content;
      }

      // Handle tool calls
      if (delta?.tool_calls) {
        for (const toolCall of delta.tool_calls) {
          if (!toolCalls[toolCall.index]) {
            toolCalls[toolCall.index] = {
              id: toolCall.id,
              type: "function",
              function: { name: "", arguments: "" },
            };
          }
          if (toolCall.function?.name) {
            toolCalls[toolCall.index].function.name = toolCall.function.name;
          }
          if (toolCall.function?.arguments) {
            toolCalls[toolCall.index].function.arguments +=
              toolCall.function.arguments;
          }
        }
      }

      // Yield current state
      yield {
        content: [
          ...(content ? [{ type: "text" as const, text: content }] : []),
          ...toolCalls.map((tc) => ({
            type: "tool-call" as const,
            toolCallId: tc.id,
            toolName: tc.function.name,
            // arguments may be incomplete JSON mid-stream; fall back to {}
            args: (() => {
              try {
                return JSON.parse(tc.function.arguments || "{}");
              } catch {
                return {};
              }
            })(),
          })),
        ],
      };
    }
  },
};
```
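`convertToOpenAIMessages` in the example above is not provided by assistant-ui; it is a helper you write yourself. A minimal sketch under simplified message types (the real `ThreadMessage` type carries more fields, and tool-result messages would also need mapping):

```tsx
// Hypothetical helper: flatten simplified ThreadMessage-like objects
// into the OpenAI chat completion message format.
type Part =
  | { type: "text"; text: string }
  | { type: "tool-call"; toolCallId: string; toolName: string; args: object };

type SimpleMessage = {
  role: "user" | "assistant" | "system";
  content: readonly Part[];
};

function convertToOpenAIMessages(messages: readonly SimpleMessage[]) {
  return messages.map((m) => {
    // Join all text parts into a single string
    const text = m.content
      .filter((p): p is Extract<Part, { type: "text" }> => p.type === "text")
      .map((p) => p.text)
      .join("\n");

    if (m.role !== "assistant") return { role: m.role, content: text };

    // Assistant messages may also carry tool calls
    const toolCalls = m.content
      .filter(
        (p): p is Extract<Part, { type: "tool-call" }> =>
          p.type === "tool-call",
      )
      .map((p) => ({
        id: p.toolCallId,
        type: "function" as const,
        function: { name: p.toolName, arguments: JSON.stringify(p.args) },
      }));

    return {
      role: m.role,
      content: text || null,
      ...(toolCalls.length ? { tool_calls: toolCalls } : {}),
    };
  });
}
```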
Tool Calling \[#tool-calling]
`LocalRuntime` supports OpenAI-compatible function calling with automatic or human-in-the-loop execution.
Basic Tool Definition \[#basic-tool-definition]
Tools should be registered using the `Tools()` API with `useAui()`:
```tsx
import { useAui, Tools, type Toolkit } from "@assistant-ui/react";
import { z } from "zod";

// Define your toolkit
const myToolkit: Toolkit = {
  getWeather: {
    description: "Get the current weather in a location",
    parameters: z.object({
      location: z.string().describe("The city and state, e.g. San Francisco, CA"),
      unit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
    }),
    execute: async ({ location, unit }) => {
      const weather = await fetchWeatherAPI(location, unit);
      return weather;
    },
  },
};

// Register tools in your runtime provider
function MyRuntimeProvider({ children }: { children: React.ReactNode }) {
  const runtime = useLocalRuntime(MyModelAdapter);

  // Register all tools
  const aui = useAui({
    tools: Tools({ toolkit: myToolkit }),
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
The tools will be available to your adapter via the `context` parameter in the `run` function. See the [Tools guide](/docs/guides/tools) for more details on tool registration and advanced features.
Human-in-the-Loop Approval \[#human-in-the-loop-approval]
Require user confirmation before executing certain tools:
```tsx
const runtime = useLocalRuntime(MyModelAdapter, {
  unstable_humanToolNames: ["delete_file", "send_email"],
});
```
Tool Execution \[#tool-execution]
Tools are executed automatically by the runtime. The model adapter receives tool results in subsequent messages:
```tsx
// Messages will include tool calls and results:
[
  { role: "user", content: "What's the weather in SF?" },
  {
    role: "assistant",
    content: [
      {
        type: "tool-call",
        toolCallId: "call_123",
        toolName: "get_weather",
        args: { location: "San Francisco, CA" },
      },
    ],
  },
  {
    role: "tool",
    content: [
      {
        type: "tool-result",
        toolCallId: "call_123",
        result: { temperature: 72, condition: "sunny" },
      },
    ],
  },
  {
    role: "assistant",
    content: "The weather in San Francisco is sunny and 72°F.",
  },
];
```
Multi-Thread Support \[#multi-thread-support]
`LocalRuntime` supports multiple conversation threads through two approaches:
1\. Assistant Cloud Integration \[#1-assistant-cloud-integration]
```tsx
import { useLocalRuntime } from "@assistant-ui/react";
import { AssistantCloud } from "assistant-cloud";

const cloud = new AssistantCloud({
  apiKey: process.env.ASSISTANT_CLOUD_API_KEY,
});

const runtime = useLocalRuntime(MyModelAdapter, {
  cloud, // Enables multi-thread support
});
```
With Assistant Cloud, you get:
* Multiple conversation threads
* Thread persistence across sessions
* Thread management (create, switch, rename, archive, delete)
* Automatic synchronization across devices
* Built-in user authentication
2\. Custom Database with useRemoteThreadListRuntime \[#2-custom-database-with-useremotethreadlistruntime]
For custom thread storage, use `useRemoteThreadListRuntime` with your own adapter:
```tsx
import {
  useRemoteThreadListRuntime,
  useAui,
  RuntimeAdapterProvider,
  AssistantRuntimeProvider,
  type RemoteThreadListAdapter,
  type ThreadHistoryAdapter,
} from "@assistant-ui/react";
import { createAssistantStream } from "assistant-stream";
import { useMemo } from "react";

// Implement your custom adapter with proper message persistence
const myDatabaseAdapter: RemoteThreadListAdapter = {
  async list() {
    const threads = await db.threads.findAll();
    return {
      threads: threads.map((t) => ({
        status: t.archived ? "archived" : "regular",
        remoteId: t.id,
        title: t.title,
      })),
    };
  },

  async initialize(threadId) {
    const thread = await db.threads.create({ id: threadId });
    return { remoteId: thread.id };
  },

  async rename(remoteId, newTitle) {
    await db.threads.update(remoteId, { title: newTitle });
  },

  async archive(remoteId) {
    await db.threads.update(remoteId, { archived: true });
  },

  async unarchive(remoteId) {
    await db.threads.update(remoteId, { archived: false });
  },

  async delete(remoteId) {
    // Delete thread and its messages
    await db.messages.deleteByThreadId(remoteId);
    await db.threads.delete(remoteId);
  },

  async generateTitle(remoteId, unstable_messages) {
    // Generate title from messages using your AI
    const newTitle = await generateTitle(unstable_messages);

    // Persist the title in your DB
    await db.threads.update(remoteId, { title: newTitle });

    // IMPORTANT: Return an AssistantStream so the UI updates
    return createAssistantStream((controller) => {
      controller.appendText(newTitle);
      controller.close();
    });
  },
};
// Complete implementation with message persistence using the Provider pattern
export function MyRuntimeProvider({ children }) {
  const runtime = useRemoteThreadListRuntime({
    runtimeHook: () => {
      return useLocalRuntime(MyModelAdapter);
    },
    adapter: {
      ...myDatabaseAdapter,
      // The Provider component adds thread-specific adapters
      unstable_Provider: ({ children }) => {
        // This runs in the context of each thread
        const aui = useAui();

        // Create thread-specific history adapter
        const history = useMemo(
          () => ({
            async load() {
              const { remoteId } = aui.threadListItem().getState();
              if (!remoteId) return { messages: [] };

              const rows = await db.messages.findByThreadId(remoteId);
              return {
                messages: rows.map((row) => {
                  const common = {
                    id: row.id,
                    createdAt: new Date(row.createdAt),
                  };
                  // `content` is stored as JSON; parse back into message parts
                  const content = JSON.parse(row.content);

                  if (row.role === "user") {
                    return {
                      parentId: row.parentId,
                      message: {
                        ...common,
                        role: "user" as const,
                        content,
                        attachments: [],
                        metadata: { custom: {} },
                      },
                    };
                  }
                  if (row.role === "assistant") {
                    return {
                      parentId: row.parentId,
                      message: {
                        ...common,
                        role: "assistant" as const,
                        content,
                        status: { type: "complete", reason: "stop" } as const,
                        metadata: {
                          custom: {},
                          unstable_state: null,
                          unstable_annotations: [],
                          unstable_data: [],
                          steps: [],
                        },
                      },
                    };
                  }
                  return {
                    parentId: row.parentId,
                    message: {
                      ...common,
                      role: "system" as const,
                      content,
                      metadata: { custom: {} },
                    },
                  };
                }),
              };
            },

            async append({ message, parentId }) {
              // Wait for initialization to get remoteId (safe to call multiple times)
              const { remoteId } = await aui.threadListItem().initialize();

              await db.messages.create({
                threadId: remoteId,
                parentId,
                id: message.id,
                role: message.role,
                content: JSON.stringify(message.content),
                createdAt: message.createdAt,
              });
            },
          }),
          [aui],
        );

        const adapters = useMemo(() => ({ history }), [history]);

        return (
          <RuntimeAdapterProvider adapters={adapters}>
            {children}
          </RuntimeAdapterProvider>
        );
      },
    },
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
The `generateTitle` method must return an `AssistantStream` containing the title text. The easiest, type-safe way is to use `createAssistantStream` and call `controller.appendText(newTitle)` followed by `controller.close()`. Returning a raw `ReadableStream` won't update the thread list UI.
Understanding the Architecture \[#understanding-the-architecture]
**Key Insight**: The `unstable_Provider` component in your adapter runs in the
context of each thread, giving you access to thread-specific information like
`remoteId`. This is where you add the history adapter for message persistence.
The complete multi-thread implementation requires:
1. **RemoteThreadListAdapter** - Manages thread metadata (list, create, rename, archive, delete)
2. **unstable\_Provider** - Component that provides thread-specific adapters (like history)
3. **ThreadHistoryAdapter** - Persists messages for each thread (load, append)
4. **runtimeHook** - Creates a basic `LocalRuntime` (adapters are added by Provider)
Without the history adapter, threads would have no message persistence, making them effectively useless. The Provider pattern allows you to add thread-specific functionality while keeping the runtime creation simple.
When implementing a history adapter, `append()` may be called before the thread is fully initialized, causing the first message to be lost. Instead of checking `if (!remoteId)`, await initialization to ensure the `remoteId` is available:
```tsx
import { useAui } from "@assistant-ui/react";

// Inside your unstable_Provider component
const aui = useAui();

const history = useMemo(
  () => ({
    async append({ message, parentId }) {
      // Wait for initialization - safe to call multiple times
      const { remoteId } = await aui.threadListItem().initialize();
      await db.messages.create({ threadId: remoteId, parentId, ...message });
    },
    // ...
  }),
  [aui],
);
```
See `AssistantCloudThreadHistoryAdapter` in the source for a production reference.
Database Schema Example \[#database-schema-example]
```typescript
// Example database schema for thread persistence
interface ThreadRecord {
  id: string;
  title: string;
  archived: boolean;
  createdAt: Date;
  updatedAt: Date;
}

interface MessageRecord {
  id: string;
  threadId: string;
  parentId: string | null;
  role: "user" | "assistant" | "system";
  content: string; // JSON-encoded message content parts
  createdAt: Date;
}
```
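Since `content` is stored as a JSON string, parts must be serialized on write and parsed on read. A minimal sketch of the round trip (field names match the `MessageRecord` above):

```typescript
// The `content` column holds JSON-encoded message parts.
type ContentPart = { type: "text"; text: string };

// Writing: serialize parts before inserting the row
const parts: ContentPart[] = [{ type: "text", text: "Hello!" }];
const row = {
  id: "msg_1",
  threadId: "thread_1",
  parentId: null as string | null,
  role: "user" as const,
  content: JSON.stringify(parts), // string, as in MessageRecord.content
  createdAt: new Date(),
};

// Reading: parse back into parts before building ThreadMessages
const restored: ContentPart[] = JSON.parse(row.content);
```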
Both approaches provide full multi-thread support. Choose Assistant Cloud for a managed solution or implement your own adapter for custom storage requirements.
Adapters \[#adapters]
Extend `LocalRuntime` capabilities with adapters. The runtime automatically enables/disables UI features based on which adapters are provided.
Attachment Adapter \[#attachment-adapter]
Enable file and image uploads:
```tsx
const attachmentAdapter: AttachmentAdapter = {
  accept: "image/*,application/pdf",

  async add({ file }) {
    const formData = new FormData();
    formData.append("file", file);

    const response = await fetch("/api/upload", {
      method: "POST",
      body: formData,
    });
    const { id, url } = await response.json();

    return {
      id,
      type: file.type.startsWith("image/") ? "image" : "document",
      name: file.name,
      contentType: file.type,
      file,
      url,
      status: { type: "requires-action", reason: "composer-send" },
    };
  },

  async send(attachment) {
    return {
      ...attachment,
      status: { type: "complete" },
      content: [
        attachment.type === "image"
          ? { type: "image", image: attachment.url }
          : { type: "text", text: `[${attachment.name}](${attachment.url})` },
      ],
    };
  },

  async remove(attachment) {
    await fetch(`/api/upload/${attachment.id}`, {
      method: "DELETE",
    });
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { attachments: attachmentAdapter },
});

// For multiple file types, use CompositeAttachmentAdapter:
const runtimeWithComposite = useLocalRuntime(MyModelAdapter, {
  adapters: {
    attachments: new CompositeAttachmentAdapter([
      new SimpleImageAttachmentAdapter(),
      new SimpleTextAttachmentAdapter(),
      customPDFAdapter,
    ]),
  },
});
```
Thread History Adapter \[#thread-history-adapter]
Persist and resume conversations:
```tsx
const historyAdapter: ThreadHistoryAdapter = {
  async load() {
    // Load messages from your storage.
    // The API must return `{ messages: { parentId, message }[] }`
    // where each `message` is a full ThreadMessage
    // (including `metadata.custom`, plus `attachments` on user messages
    // and `status` + the rest of `metadata` on assistant messages).
    const response = await fetch(`/api/thread/current`);
    return await response.json();
  },

  async append({ message, parentId }) {
    // Save new message to storage
    await fetch(`/api/thread/messages`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message, parentId }),
    });
  },

  // Optional: Resume interrupted conversations
  async resume({ messages }) {
    const lastMessage = messages[messages.length - 1];
    if (lastMessage?.role === "user") {
      // Resume generating assistant response
      const response = await fetch("/api/chat/resume", {
        method: "POST",
        body: JSON.stringify({ messages }),
      });
      return response.body; // Return stream
    }
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { history: historyAdapter },
});
```
The history adapter handles persistence for the current thread's messages. For
multi-thread support with custom storage, use either
`useRemoteThreadListRuntime` with `LocalRuntime` or `ExternalStoreRuntime`
with a thread list adapter.
Speech Synthesis Adapter \[#speech-synthesis-adapter]
Add text-to-speech capabilities:
```tsx
const speechAdapter: SpeechSynthesisAdapter = {
  speak(text) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = 1.0;
    utterance.pitch = 1.0;

    const subscribers = new Set<() => void>();
    const result: SpeechSynthesisAdapter.Utterance = {
      status: { type: "running" },
      cancel: () => {
        speechSynthesis.cancel();
        result.status = { type: "ended", reason: "cancelled" };
        subscribers.forEach((cb) => cb());
      },
      subscribe: (callback) => {
        subscribers.add(callback);
        return () => subscribers.delete(callback);
      },
    };

    utterance.addEventListener("end", () => {
      result.status = { type: "ended", reason: "finished" };
      subscribers.forEach((cb) => cb());
    });
    utterance.addEventListener("error", (e) => {
      result.status = { type: "ended", reason: "error", error: e.error };
      subscribers.forEach((cb) => cb());
    });

    speechSynthesis.speak(utterance);
    return result;
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { speech: speechAdapter },
});
```
Feedback Adapter \[#feedback-adapter]
Collect user feedback on messages:
```tsx
const feedbackAdapter: FeedbackAdapter = {
  async submit(feedback) {
    await fetch("/api/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messageId: feedback.message.id,
        rating: feedback.type, // "positive" or "negative"
      }),
    });
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { feedback: feedbackAdapter },
});
```
Suggestion Adapter \[#suggestion-adapter]
Provide follow-up suggestions:
```tsx
const suggestionAdapter: SuggestionAdapter = {
  async *generate({ messages }) {
    // Analyze conversation context
    const lastMessage = messages[messages.length - 1];

    // Generate suggestions
    const suggestions = await generateSuggestions(lastMessage);

    yield suggestions.map((prompt) => ({
      prompt,
    }));
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { suggestion: suggestionAdapter },
});
```
Advanced Features \[#advanced-features]
Resuming a Run \[#resuming-a-run]
`resumeRun` reconnects to an in-progress or interrupted assistant run. This is essential for scenarios like:
* **Page refresh** while the backend is still generating a response
* **Network reconnection** after a temporary disconnect
* **Tab backgrounding** where the browser suspended the WebSocket connection
* **Thread switching** to a conversation that has an active backend stream
How it works \[#how-it-works]
When you call `resumeRun`, the local runtime:
1. Creates a new assistant message in the thread (or continues the existing one)
2. Calls your provided `stream` function with `ChatModelRunOptions` (messages, abort signal, model context, etc.)
3. Iterates over each `ChatModelRunResult` yielded by the stream, updating the assistant message with new content, status, and metadata
4. Completes when the stream finishes or is cancelled
Unlike `startRun` (which uses the ChatModelAdapter), `resumeRun` **requires** a `stream` parameter — you provide the async generator that produces the response.
Basic example \[#basic-example]
```tsx
import { useAui } from "@assistant-ui/react";
import type { ChatModelRunResult } from "@assistant-ui/core";

const aui = useAui();

// Create a custom stream
async function* createCustomStream(): AsyncGenerator<ChatModelRunResult> {
  let text = "Initial response";
  yield {
    content: [{ type: "text", text }],
  };

  // Simulate delay
  await new Promise((resolve) => setTimeout(resolve, 500));

  text = "Initial response. And here's more content...";
  yield {
    content: [{ type: "text", text }],
  };
}

// Resume a run with the custom stream
aui.thread().resumeRun({
  parentId: "message-id", // ID of the message to respond to
  stream: createCustomStream,
});
```
Reconnecting to a backend stream \[#reconnecting-to-a-backend-stream]
A common pattern is checking whether the backend is still running, then reconnecting:
```tsx
import { useAui } from "@assistant-ui/react";
import { useEffect, useRef } from "react";

function useStreamReconnect(threadId: string) {
  const aui = useAui();
  const hasCheckedRef = useRef(false);

  useEffect(() => {
    if (hasCheckedRef.current) return;
    hasCheckedRef.current = true;

    const checkAndResume = async () => {
      // Check if the backend still has an active stream
      const status = await fetch(`/api/status/${threadId}`).then((r) =>
        r.json(),
      );

      if (status.isRunning) {
        const parentId = aui.thread().getState().messages.at(-1)?.id ?? null;
        // `resumeRun` requires a stream; reconnect to your backend here
        // (`reconnectToBackendStream` is a placeholder for your own helper)
        aui.thread().resumeRun({
          parentId,
          stream: () => reconnectToBackendStream(threadId),
        });
      }
    };

    checkAndResume();
  }, [aui, threadId]);
}
```
ChatModelRunResult \[#chatmodelrunresult]
Each value yielded by the stream is a `ChatModelRunResult`:
```tsx
type ChatModelRunResult = {
  /** The message content parts (text, tool calls, etc.) */
  content: ThreadAssistantContentPart[];

  /** Optional status override */
  status?: MessageStatus;

  /** Optional metadata (state, annotations, custom fields) */
  metadata?: {
    custom?: Record<string, unknown>;
    steps?: unknown[];
    // ...
  };
};
```
The stream should yield the **full cumulative content** on each iteration (not deltas). Each yield replaces the previous content of the assistant message.
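A minimal sketch of this accumulation pattern, using a synchronous generator for brevity (the same logic applies inside `async *run`):

```typescript
// Accumulate deltas and yield the full text so far on every iteration.
function* toCumulative(deltas: string[]) {
  let text = "";
  for (const delta of deltas) {
    text += delta; // accumulate
    yield { content: [{ type: "text" as const, text }] };
  }
}

// Each yield carries the complete content, not just the newest delta:
const yields = [...toCumulative(["Hel", "lo"])].map((r) => r.content[0].text);
// yields is ["Hel", "Hello"]
```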
Custom Thread Management \[#custom-thread-management]
Access thread actions for advanced control with `useAui`:
```tsx
import { useAui } from "@assistant-ui/react";

function MyComponent() {
  const aui = useAui();

  // Cancel current generation
  const handleCancel = () => {
    aui.thread().cancelRun();
  };

  // Switch to a different branch (message scope)
  // aui.message().switchToBranch({ position: "next" });
  // aui.message().switchToBranch({ position: "previous" });

  // Reload a message (message scope)
  // aui.message().reload();

  return null; // Your UI here
}
```
Integration Examples \[#integration-examples]
OpenAI Integration \[#openai-integration]
```tsx
import { OpenAI } from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  dangerouslyAllowBrowser: true, // Use server-side in production
});

const OpenAIAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal, context }) {
    const stream = await openai.chat.completions.create(
      {
        model: "gpt-4o",
        messages: messages.map((m) => ({
          role: m.role,
          content: m.content
            .filter((c) => c.type === "text")
            .map((c) => c.text)
            .join("\n"),
        })),
        stream: true,
      },
      // the abort signal is a request option, not part of the body
      { signal: abortSignal },
    );

    let fullText = "";
    for await (const chunk of stream) {
      const content = chunk.choices[0]?.delta?.content;
      if (content) {
        fullText += content;
        yield {
          content: [{ type: "text", text: fullText }],
        };
      }
    }
  },
};
```
Custom REST API Integration \[#custom-rest-api-integration]
```tsx
const CustomAPIAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal, unstable_threadId }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messages: messages.map((m) => ({
          role: m.role,
          content: m.content,
        })),
        threadId: unstable_threadId, // Pass thread ID to your backend
      }),
      signal: abortSignal,
    });

    if (!response.ok) {
      throw new Error(`API error: ${response.statusText}`);
    }

    const data = await response.json();
    return {
      content: [{ type: "text", text: data.message }],
    };
  },
};
```
Best Practices \[#best-practices]
1. **Error Handling** - Always handle API errors gracefully:
```tsx
async *run({ messages, abortSignal }) {
  try {
    const response = await fetchAPI(messages, abortSignal);
    yield response;
  } catch (error) {
    if (error.name === "AbortError") {
      // User cancelled - this is normal
      return;
    }
    // Re-throw other errors to display in UI
    throw error;
  }
}
```
2. **Abort Signal** - Always pass the abort signal to fetch requests:
```tsx
fetch(url, { signal: abortSignal });
```
3. **Memory Management** - For long conversations, consider implementing message limits:
```tsx
const recentMessages = messages.slice(-20); // Keep last 20 messages
```
4. **Type Safety** - Use TypeScript for better development experience:
```tsx
import type { ChatModelAdapter, ThreadMessage } from "@assistant-ui/react";
```
Comparison with `ExternalStoreRuntime` \[#comparison-with-externalstoreruntime]
| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
| --------------------- | -------------------------------------------- | ------------------------------------------------ |
| State Management | Built-in | You manage |
| Setup Complexity | Simple | More complex |
| Flexibility | Extensible via adapters | Full control |
| Message Editing | Automatic | Requires `onEdit` handler |
| Branch Switching | Automatic | Requires `setMessages` handler |
| Multi-Thread Support | Yes (with Assistant Cloud or custom adapter) | Yes (with thread list adapter) |
| Custom Thread Storage | Yes (with useRemoteThreadListRuntime) | Yes |
| Persistence | Via history adapter or Assistant Cloud | Your implementation |
| Best For | Quick prototypes, standard apps, cloud-based | Complex state requirements, custom storage needs |
Troubleshooting \[#troubleshooting]
Common Issues \[#common-issues]
**Messages not appearing**: Ensure your adapter returns the correct format:
```tsx
return {
  content: [{ type: "text", text: "response" }],
};
```
**Streaming not working**: Make sure to use `async *run` (note the asterisk):
```tsx
async *run({ messages }) { // ✅ Correct
async run({ messages }) { // ❌ Wrong for streaming
```
Tool UI Flickers or Disappears During Streaming \[#tool-ui-flickers-or-disappears-during-streaming]
A common issue when implementing a streaming `ChatModelAdapter` is seeing a tool's UI appear for a moment and then disappear. This is caused by failing to accumulate the `tool_calls` correctly across multiple stream chunks. State must be stored **outside** the streaming loop to persist.
**❌ Incorrect: Forgetting Previous Tool Calls**
This implementation incorrectly re-creates the `content` array for every chunk. If a later chunk contains only text, tool calls from previous chunks are lost, causing the UI to disappear.
```tsx
// This implementation incorrectly re-creates the `content` array for every chunk.
// If a later chunk contains only text, tool calls from previous chunks are lost.
async *run({ messages, abortSignal, context }) {
  const stream = await backendApi({ messages, abortSignal, context });

  let text = "";
  for await (const chunk of stream) {
    // ❌ DON'T: This overwrites toolCalls with only the current chunk's data
    const toolCalls = chunk.tool_calls || [];
    const content = [{ type: "text", text }];

    for (const toolCall of toolCalls) {
      content.push({
        type: "tool-call",
        toolName: toolCall.name,
        toolCallId: toolCall.id,
        args: toolCall.args,
      });
    }

    yield { content }; // This yield might not contain the tool call anymore
  }
}
```
**✅ Correct: Accumulating State**
This implementation uses a `Map` outside the loop to remember all tool calls.
```tsx
// This implementation uses a Map outside the loop to remember all tool calls.
async *run({ messages, abortSignal, context }) {
  const stream = await backendApi({ messages, abortSignal, context });

  let text = "";
  // ✅ DO: Declare state outside the loop
  const toolCallsMap = new Map();

  for await (const chunk of stream) {
    text += chunk.content || "";

    // ✅ DO: Add/update tool calls in the persistent map
    for (const toolCall of chunk.tool_calls || []) {
      toolCallsMap.set(toolCall.toolCallId, {
        type: "tool-call",
        toolName: toolCall.name,
        toolCallId: toolCall.toolCallId,
        args: toolCall.args,
      });
    }

    // ✅ DO: Build content from accumulated state
    const content = [
      ...(text ? [{ type: "text", text }] : []),
      ...Array.from(toolCallsMap.values()),
    ];

    yield { content }; // Yield the complete, correct state every time
  }
}
```
Debug Tips \[#debug-tips]
1. **Log adapter calls** to trace execution:
```tsx
async *run(options) {
  console.log("Adapter called with:", options);
  // ... rest of implementation
}
```
2. **Check network requests** in browser DevTools
3. **Verify message format** matches ThreadMessage structure
API Reference \[#api-reference]
`ChatModelAdapter` \[#chatmodeladapter]
The main interface for connecting your API to `LocalRuntime`.
`ChatModelRunOptions` \[#chatmodelrunoptions]
Parameters passed to the `run` function.
`LocalRuntimeOptions` \[#localruntimeoptions]
Configuration options for the `LocalRuntime`.
`RemoteThreadListAdapter` \[#remotethreadlistadapter]
Interface for implementing custom thread list storage.
Related Runtime APIs \[#related-runtime-apis]
* [AssistantRuntime API](/docs/api-reference/runtimes/assistant-runtime) - Core runtime interface and methods
* [ThreadRuntime API](/docs/api-reference/runtimes/thread-runtime) - Thread-specific operations and state management
Related Resources \[#related-resources]
* [Pick a Runtime Guide](/docs/runtimes/pick-a-runtime)
* [`ExternalStoreRuntime`](/docs/runtimes/custom/external-store)
* [Examples Repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples)