# LocalRuntime
URL: /docs/runtimes/custom/local
Quickest path to a working chat. Handles state while you handle the API.
## Overview
`LocalRuntime` is the simplest way to connect your own custom backend to assistant-ui. It manages all chat state internally while providing a clean adapter interface to connect with any REST API, OpenAI, or custom language model.
`LocalRuntime` provides:
* **Built-in state management** for messages, threads, and conversation history
* **Automatic features** like message editing, reloading, and branch switching
* **Multi-thread support** through [Assistant Cloud](/docs/cloud/overview) or your own database using `useRemoteThreadListRuntime`
* **Simple adapter pattern** to connect any backend API
While `LocalRuntime` manages state in-memory by default, it offers multiple persistence options through adapters: use the history adapter for single-thread persistence, Assistant Cloud for managed multi-thread support, or implement your own storage with `useRemoteThreadListRuntime`.
## When to Use
Use `LocalRuntime` if you need:
* **Quick setup with minimal configuration** - Get a fully functional chat interface with just a few lines of code
* **Built-in state management** - No need to manage messages, threads, or conversation history yourself
* **Automatic features** - Branch switching, message editing, and regeneration work out of the box
* **API flexibility** - Connect to any REST endpoint, OpenAI, or custom model with a simple adapter
* **Multi-thread support** - Full thread management with Assistant Cloud or custom database
* **Thread persistence** - Via history adapter, Assistant Cloud, or custom thread list adapter
## Getting Started
### Create a Next.js project
```sh
npx create-next-app@latest my-app
cd my-app
```
### Install `@assistant-ui/react`
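Install the package with your package manager of choice:

```sh npm2yarn
npm install @assistant-ui/react
```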
### Add `assistant-ui` Thread component
```sh npm2yarn
npx assistant-ui@latest add thread
```
### Define a `MyRuntimeProvider` component
Update the `MyModelAdapter` below to integrate with your own custom API.
See `LocalRuntimeOptions` [API Reference](#localruntimeoptions) for available configuration options.
```tsx twoslash include MyRuntimeProvider title="app/MyRuntimeProvider.tsx"
// @filename: /app/MyRuntimeProvider.tsx
// ---cut---
"use client";
import type { ReactNode } from "react";
import {
AssistantRuntimeProvider,
useLocalRuntime,
type ChatModelAdapter,
} from "@assistant-ui/react";
const MyModelAdapter: ChatModelAdapter = {
async run({ messages, abortSignal }) {
// TODO replace with your own API
const result = await fetch("", {
method: "POST",
headers: {
"Content-Type": "application/json",
},
// forward the messages in the chat to the API
body: JSON.stringify({
messages,
}),
// if the user hits the "cancel" button or escape keyboard key, cancel the request
signal: abortSignal,
});
const data = await result.json();
return {
content: [
{
type: "text",
text: data.text,
},
],
};
},
};
export function MyRuntimeProvider({
children,
}: Readonly<{
children: ReactNode;
}>) {
const runtime = useLocalRuntime(MyModelAdapter);
return (
{children}
);
}
```
### Wrap your app in `MyRuntimeProvider`
```tsx {1,11,17} twoslash title="app/layout.tsx"
// @include: MyRuntimeProvider
// @filename: /app/layout.tsx
// ---cut---
import type { ReactNode } from "react";
import { MyRuntimeProvider } from "@/app/MyRuntimeProvider";

export default function RootLayout({
  children,
}: Readonly<{
  children: ReactNode;
}>) {
  return (
    <MyRuntimeProvider>
      <html lang="en">
        <body>{children}</body>
      </html>
    </MyRuntimeProvider>
  );
}
```
### Use the Thread component
```tsx title="app/page.tsx"
import { Thread } from "@/components/assistant-ui/thread";

export default function Page() {
  return <Thread />;
}
```
## Streaming Responses
Implement streaming by declaring the `run` function as an `AsyncGenerator`.
```tsx twoslash {2, 11-13} title="app/MyRuntimeProvider.tsx"
import {
  ChatModelAdapter,
  ThreadMessage,
  type ModelContext,
} from "@assistant-ui/react";
import { OpenAI } from "openai";

const openai = new OpenAI();

const backendApi = async ({
  messages,
  abortSignal,
  context,
}: {
  messages: readonly ThreadMessage[];
  abortSignal: AbortSignal;
  context: ModelContext;
}) => {
  return openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Say this is a test" }],
    stream: true,
  });
};
// ---cut---
const MyModelAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal, context }) {
    const stream = await backendApi({ messages, abortSignal, context });

    let text = "";
    for await (const part of stream) {
      text += part.choices[0]?.delta?.content || "";

      yield {
        content: [{ type: "text", text }],
      };
    }
  },
};
```
### Streaming with Tool Calls
Handle streaming responses that include function calls:
```tsx
const MyModelAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal, context }) {
    const stream = await openai.chat.completions.create(
      {
        model: "gpt-4o",
        messages: convertToOpenAIMessages(messages),
        tools: context.tools,
        stream: true,
      },
      // the abort signal belongs in the request options, not the body
      { signal: abortSignal },
    );

    let content = "";
    const toolCalls: any[] = [];

    // Tool-call arguments stream in incrementally and may be incomplete JSON
    const safeParse = (json: string) => {
      try {
        return JSON.parse(json);
      } catch {
        return {};
      }
    };

    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta;

      // Handle text content
      if (delta?.content) {
        content += delta.content;
      }

      // Handle tool calls
      if (delta?.tool_calls) {
        for (const toolCall of delta.tool_calls) {
          if (!toolCalls[toolCall.index]) {
            toolCalls[toolCall.index] = {
              id: toolCall.id,
              type: "function",
              function: { name: "", arguments: "" },
            };
          }
          if (toolCall.function?.name) {
            toolCalls[toolCall.index].function.name = toolCall.function.name;
          }
          if (toolCall.function?.arguments) {
            toolCalls[toolCall.index].function.arguments +=
              toolCall.function.arguments;
          }
        }
      }

      // Yield current state
      yield {
        content: [
          ...(content ? [{ type: "text" as const, text: content }] : []),
          ...toolCalls.map((tc) => ({
            type: "tool-call" as const,
            toolCallId: tc.id,
            toolName: tc.function.name,
            args: safeParse(tc.function.arguments || "{}"),
          })),
        ],
      };
    }
  },
};
```
## Tool Calling
`LocalRuntime` supports OpenAI-compatible function calling with automatic or human-in-the-loop execution.
### Basic Tool Definition
Tools should be registered using the `Tools()` API with `useAui()`:
```tsx
import { useAui, Tools, type Toolkit } from "@assistant-ui/react";
import { z } from "zod";

// Define your toolkit
const myToolkit: Toolkit = {
  getWeather: {
    description: "Get the current weather in a location",
    parameters: z.object({
      location: z
        .string()
        .describe("The city and state, e.g. San Francisco, CA"),
      unit: z.enum(["celsius", "fahrenheit"]).default("celsius"),
    }),
    execute: async ({ location, unit }) => {
      const weather = await fetchWeatherAPI(location, unit);
      return weather;
    },
  },
};

// Register tools in your runtime provider
function MyRuntimeProvider({ children }: { children: React.ReactNode }) {
  const runtime = useLocalRuntime(MyModelAdapter);

  // Register all tools
  useAui({
    tools: Tools({ toolkit: myToolkit }),
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
The tools will be available to your adapter via the `context` parameter in the `run` function. See the [Tools guide](/docs/guides/tools) for more details on tool registration and advanced features.
### Human-in-the-Loop Approval
Require user confirmation before executing certain tools:
```tsx
const runtime = useLocalRuntime(MyModelAdapter, {
  unstable_humanToolNames: ["delete_file", "send_email"],
});
```
### Tool Execution
Tools are executed automatically by the runtime. The model adapter receives tool results in subsequent messages:
```tsx
// Messages will include tool calls and results:
[
  { role: "user", content: "What's the weather in SF?" },
  {
    role: "assistant",
    content: [
      {
        type: "tool-call",
        toolCallId: "call_123",
        toolName: "get_weather",
        args: { location: "San Francisco, CA" },
      },
    ],
  },
  {
    role: "tool",
    content: [
      {
        type: "tool-result",
        toolCallId: "call_123",
        result: { temperature: 72, condition: "sunny" },
      },
    ],
  },
  {
    role: "assistant",
    content: "The weather in San Francisco is sunny and 72°F.",
  },
];
```
## Multi-Thread Support
`LocalRuntime` supports multiple conversation threads through two approaches:
### 1. Assistant Cloud Integration
```tsx
import { useLocalRuntime } from "@assistant-ui/react";
import { AssistantCloud } from "assistant-cloud";

const cloud = new AssistantCloud({
  apiKey: process.env.ASSISTANT_CLOUD_API_KEY,
});

const runtime = useLocalRuntime(MyModelAdapter, {
  cloud, // Enables multi-thread support
});
```
With Assistant Cloud, you get:
* Multiple conversation threads
* Thread persistence across sessions
* Thread management (create, switch, rename, archive, delete)
* Automatic synchronization across devices
* Built-in user authentication
### 2. Custom Database with `useRemoteThreadListRuntime`
For custom thread storage, use `useRemoteThreadListRuntime` with your own adapter:
```tsx
import {
  unstable_useRemoteThreadListRuntime as useRemoteThreadListRuntime,
  useAui,
  RuntimeAdapterProvider,
  AssistantRuntimeProvider,
  type unstable_RemoteThreadListAdapter as RemoteThreadListAdapter,
  type ThreadHistoryAdapter,
} from "@assistant-ui/react";
import { createAssistantStream } from "assistant-stream";
import { useMemo } from "react";

// Implement your custom adapter with proper message persistence
const myDatabaseAdapter: RemoteThreadListAdapter = {
  async list() {
    const threads = await db.threads.findAll();
    return {
      threads: threads.map((t) => ({
        status: t.archived ? "archived" : "regular",
        remoteId: t.id,
        title: t.title,
      })),
    };
  },

  async initialize(threadId) {
    const thread = await db.threads.create({ id: threadId });
    return { remoteId: thread.id };
  },

  async rename(remoteId, newTitle) {
    await db.threads.update(remoteId, { title: newTitle });
  },

  async archive(remoteId) {
    await db.threads.update(remoteId, { archived: true });
  },

  async unarchive(remoteId) {
    await db.threads.update(remoteId, { archived: false });
  },

  async delete(remoteId) {
    // Delete thread and its messages
    await db.messages.deleteByThreadId(remoteId);
    await db.threads.delete(remoteId);
  },

  async generateTitle(remoteId, messages) {
    // Generate title from messages using your AI
    const newTitle = await generateTitle(messages);

    // Persist the title in your DB
    await db.threads.update(remoteId, { title: newTitle });

    // IMPORTANT: Return an AssistantStream so the UI updates
    return createAssistantStream((controller) => {
      controller.appendText(newTitle);
      controller.close();
    });
  },
};
// Complete implementation with message persistence using Provider pattern
export function MyRuntimeProvider({ children }) {
  const runtime = useRemoteThreadListRuntime({
    runtimeHook: () => {
      return useLocalRuntime(MyModelAdapter);
    },
    adapter: {
      ...myDatabaseAdapter,
      // The Provider component adds thread-specific adapters
      unstable_Provider: ({ children }) => {
        // This runs in the context of each thread
        const aui = useAui();

        // Create thread-specific history adapter
        const history = useMemo(
          () => ({
            async load() {
              const { remoteId } = aui.threadListItem().getState();
              if (!remoteId) return { messages: [] };

              const messages = await db.messages.findByThreadId(remoteId);
              return {
                messages: messages.map((m) => ({
                  role: m.role,
                  content: m.content,
                  id: m.id,
                  createdAt: new Date(m.createdAt),
                })),
              };
            },
            async append(message) {
              // Wait for initialization to get remoteId (safe to call multiple times)
              const { remoteId } = await aui.threadListItem().initialize();

              await db.messages.create({
                threadId: remoteId,
                role: message.role,
                content: message.content,
                id: message.id,
                createdAt: message.createdAt,
              });
            },
          }),
          [aui],
        );

        const adapters = useMemo(() => ({ history }), [history]);

        return (
          <RuntimeAdapterProvider adapters={adapters}>
            {children}
          </RuntimeAdapterProvider>
        );
      },
    },
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      {children}
    </AssistantRuntimeProvider>
  );
}
```
The `generateTitle` method must return an `AssistantStream` containing the title text. The easiest, type-safe way is to use `createAssistantStream` and call `controller.appendText(newTitle)` followed by `controller.close()`. Returning a raw `ReadableStream` won't update the thread list UI.
#### Understanding the Architecture
**Key Insight**: The `unstable_Provider` component in your adapter runs in the
context of each thread, giving you access to thread-specific information like
`remoteId`. This is where you add the history adapter for message persistence.
The complete multi-thread implementation requires:
1. **RemoteThreadListAdapter** - Manages thread metadata (list, create, rename, archive, delete)
2. **unstable\_Provider** - Component that provides thread-specific adapters (like history)
3. **ThreadHistoryAdapter** - Persists messages for each thread (load, append)
4. **runtimeHook** - Creates a basic `LocalRuntime` (adapters are added by Provider)
Without the history adapter, threads would have no message persistence, making them effectively useless. The Provider pattern allows you to add thread-specific functionality while keeping the runtime creation simple.
When implementing a history adapter, `append()` may be called before the thread is fully initialized, causing the first message to be lost. Instead of checking `if (!remoteId)`, await initialization to ensure the `remoteId` is available:
```tsx
import { useAui } from "@assistant-ui/react";

// Inside your unstable_Provider component
const aui = useAui();

const history = useMemo(
  () => ({
    async append(message) {
      // Wait for initialization - safe to call multiple times
      const { remoteId } = await aui.threadListItem().initialize();
      await db.messages.create({ threadId: remoteId, ...message });
    },
    // ...
  }),
  [aui],
);
```
See `AssistantCloudThreadHistoryAdapter` in the source for a production reference.
#### Database Schema Example
```typescript
// Example database schema for thread persistence
interface ThreadRecord {
  id: string;
  title: string;
  archived: boolean;
  createdAt: Date;
  updatedAt: Date;
}

interface MessageRecord {
  id: string;
  threadId: string;
  role: "user" | "assistant" | "system";
  content: any; // Store as JSON
  createdAt: Date;
}
```
Both approaches provide full multi-thread support. Choose Assistant Cloud for a managed solution or implement your own adapter for custom storage requirements.
## Adapters
Extend `LocalRuntime` capabilities with adapters. The runtime automatically enables/disables UI features based on which adapters are provided.
### Attachment Adapter
Enable file and image uploads:
```tsx
const attachmentAdapter: AttachmentAdapter = {
  accept: "image/*,application/pdf",

  async add(file) {
    const formData = new FormData();
    formData.append("file", file);

    const response = await fetch("/api/upload", {
      method: "POST",
      body: formData,
    });

    const { id, url } = await response.json();
    return {
      id,
      type: file.type.startsWith("image/") ? "image" : "document",
      name: file.name,
      url,
    };
  },

  async remove(attachment) {
    await fetch(`/api/upload/${attachment.id}`, {
      method: "DELETE",
    });
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { attachments: attachmentAdapter },
});

// For multiple file types, use CompositeAttachmentAdapter instead:
const runtimeWithComposite = useLocalRuntime(MyModelAdapter, {
  adapters: {
    attachments: new CompositeAttachmentAdapter([
      new SimpleImageAttachmentAdapter(),
      new SimpleTextAttachmentAdapter(),
      customPDFAdapter,
    ]),
  },
});
```
### Thread History Adapter
Persist and resume conversations:
```tsx
const historyAdapter: ThreadHistoryAdapter = {
  async load() {
    // Load messages from your storage
    const response = await fetch(`/api/thread/current`);
    const { messages } = await response.json();
    return { messages };
  },

  async append(message) {
    // Save new message to storage
    await fetch(`/api/thread/messages`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ message }),
    });
  },

  // Optional: Resume interrupted conversations
  async resume({ messages }) {
    const lastMessage = messages[messages.length - 1];
    if (lastMessage?.role === "user") {
      // Resume generating assistant response
      const response = await fetch("/api/chat/resume", {
        method: "POST",
        body: JSON.stringify({ messages }),
      });
      return response.body; // Return stream
    }
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { history: historyAdapter },
});
```
The history adapter handles persistence for the current thread's messages. For
multi-thread support with custom storage, use either
`useRemoteThreadListRuntime` with `LocalRuntime` or `ExternalStoreRuntime`
with a thread list adapter.
### Speech Synthesis Adapter
Add text-to-speech capabilities:
```tsx
const speechAdapter: SpeechSynthesisAdapter = {
  speak(text) {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.rate = 1.0;
    utterance.pitch = 1.0;
    speechSynthesis.speak(utterance);
  },

  stop() {
    speechSynthesis.cancel();
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { speech: speechAdapter },
});
```
### Feedback Adapter
Collect user feedback on messages:
```tsx
const feedbackAdapter: FeedbackAdapter = {
  async submit(feedback) {
    await fetch("/api/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messageId: feedback.messageId,
        rating: feedback.type, // "positive" or "negative"
      }),
    });
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { feedback: feedbackAdapter },
});
```
### Suggestion Adapter
Provide follow-up suggestions:
```tsx
const suggestionAdapter: SuggestionAdapter = {
  async *generate({ messages }) {
    // Analyze conversation context
    const lastMessage = messages[messages.length - 1];

    // Generate suggestions
    const suggestions = await generateSuggestions(lastMessage);
    yield suggestions.map((text) => ({
      id: crypto.randomUUID(),
      text,
    }));
  },
};

const runtime = useLocalRuntime(MyModelAdapter, {
  adapters: { suggestion: suggestionAdapter },
});
```
## Advanced Features
### Resuming a Run
The `unstable_resumeRun` method is experimental and may change in future
releases.
Resume a conversation with a custom stream:
```tsx
import { useThreadRuntime, type ChatModelRunResult } from "@assistant-ui/react";

// Get the thread runtime
const thread = useThreadRuntime();

// Create a custom stream
async function* createCustomStream(): AsyncGenerator<ChatModelRunResult> {
  let text = "Initial response";
  yield {
    content: [{ type: "text", text }],
  };

  // Simulate delay
  await new Promise((resolve) => setTimeout(resolve, 500));

  text = "Initial response. And here's more content...";
  yield {
    content: [{ type: "text", text }],
  };
}

// Resume a run with the custom stream
thread.unstable_resumeRun({
  parentId: "message-id", // ID of the message to respond to
  stream: createCustomStream(), // The stream to use for resuming
});
```
### Custom Thread Management
Access thread runtime for advanced control with `useThreadRuntime`:
```tsx
import { useThreadRuntime } from "@assistant-ui/react";

function MyComponent() {
  const thread = useThreadRuntime();

  // Cancel current generation
  const handleCancel = () => {
    thread.cancelRun();
  };

  // Switch to a different branch
  const handleSwitchBranch = (messageId: string, branchIndex: number) => {
    thread.switchToBranch(messageId, branchIndex);
  };

  // Reload a message
  const handleReload = (messageId: string) => {
    thread.reload(messageId);
  };

  return null; // Your UI here
}
```
## Integration Examples
### OpenAI Integration
```tsx
import { OpenAI } from "openai";

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  dangerouslyAllowBrowser: true, // Use server-side in production
});

const OpenAIAdapter: ChatModelAdapter = {
  async *run({ messages, abortSignal }) {
    const stream = await openai.chat.completions.create(
      {
        model: "gpt-4o",
        messages: messages.map((m) => ({
          role: m.role,
          content: m.content
            .filter((c) => c.type === "text")
            .map((c) => c.text)
            .join("\n"),
        })),
        stream: true,
      },
      // the abort signal belongs in the request options, not the body
      { signal: abortSignal },
    );

    // Accumulate text so each yield contains the full message so far
    let text = "";
    for await (const chunk of stream) {
      text += chunk.choices[0]?.delta?.content ?? "";
      yield {
        content: [{ type: "text", text }],
      };
    }
  },
};
```
### Custom REST API Integration
```tsx
const CustomAPIAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal, unstable_threadId }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        messages: messages.map((m) => ({
          role: m.role,
          content: m.content,
        })),
        threadId: unstable_threadId, // Pass thread ID to your backend
      }),
      signal: abortSignal,
    });

    if (!response.ok) {
      throw new Error(`API error: ${response.statusText}`);
    }

    const data = await response.json();
    return {
      content: [{ type: "text", text: data.message }],
    };
  },
};
```
## Best Practices
1. **Error Handling** - Always handle API errors gracefully:
```tsx
async *run({ messages, abortSignal }) {
  try {
    const response = await fetchAPI(messages, abortSignal);
    yield response;
  } catch (error) {
    if (error.name === 'AbortError') {
      // User cancelled - this is normal
      return;
    }
    // Re-throw other errors to display in UI
    throw error;
  }
}
```
2. **Abort Signal** - Always pass the abort signal to fetch requests:
```tsx
fetch(url, { signal: abortSignal });
```
3. **Memory Management** - For long conversations, consider implementing message limits:
```tsx
const recentMessages = messages.slice(-20); // Keep last 20 messages
```
4. **Type Safety** - Use TypeScript for better development experience:
```tsx
import type { ChatModelAdapter, ThreadMessage } from "@assistant-ui/react";
```
## Comparison with `ExternalStoreRuntime`
| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
| --------------------- | -------------------------------------------- | ------------------------------------------------ |
| State Management | Built-in | You manage |
| Setup Complexity | Simple | More complex |
| Flexibility | Extensible via adapters | Full control |
| Message Editing | Automatic | Requires `onEdit` handler |
| Branch Switching | Automatic | Requires `setMessages` handler |
| Multi-Thread Support | Yes (with Assistant Cloud or custom adapter) | Yes (with thread list adapter) |
| Custom Thread Storage | Yes (with useRemoteThreadListRuntime) | Yes |
| Persistence | Via history adapter or Assistant Cloud | Your implementation |
| Best For | Quick prototypes, standard apps, cloud-based | Complex state requirements, custom storage needs |
## Troubleshooting
### Common Issues
**Messages not appearing**: Ensure your adapter returns the correct format:
```tsx
return {
  content: [{ type: "text", text: "response" }],
};
```
**Streaming not working**: Make sure to use `async *run` (note the asterisk):
```tsx
async *run({ messages }) { // ✅ Correct
async run({ messages }) { // ❌ Wrong for streaming
```
### Tool UI Flickers or Disappears During Streaming
A common issue when implementing a streaming `ChatModelAdapter` is seeing a tool's UI appear for a moment and then disappear. This is caused by failing to accumulate the `tool_calls` correctly across multiple stream chunks. State must be stored **outside** the streaming loop to persist.
**❌ Incorrect: Forgetting Previous Tool Calls**
This implementation incorrectly re-creates the `content` array for every chunk. If a later chunk contains only text, tool calls from previous chunks are lost, causing the UI to disappear.
```tsx
// This implementation incorrectly re-creates the `content` array for every chunk.
// If a later chunk contains only text, tool calls from previous chunks are lost.
async *run({ messages, abortSignal, context }) {
  const stream = await backendApi({ messages, abortSignal, context });

  let text = "";
  for await (const chunk of stream) {
    // ❌ DON'T: This overwrites toolCalls with only the current chunk's data
    const toolCalls = chunk.tool_calls || [];

    const content = [{ type: "text", text }];
    for (const toolCall of toolCalls) {
      content.push({
        type: "tool-call",
        toolName: toolCall.name,
        toolCallId: toolCall.id,
        args: toolCall.args,
      });
    }

    yield { content }; // This yield might not contain the tool call anymore
  }
}
```
**✅ Correct: Accumulating State**
This implementation uses a `Map` outside the loop to remember all tool calls.
```tsx
// This implementation uses a Map outside the loop to remember all tool calls.
async *run({ messages, abortSignal, context }) {
const stream = await backendApi({ messages, abortSignal, context });
let text = "";
// ✅ DO: Declare state outside the loop
const toolCallsMap = new Map();
for await (const chunk of stream) {
text += chunk.content || "";
// ✅ DO: Add/update tool calls in the persistent map
for (const toolCall of chunk.tool_calls || []) {
toolCallsMap.set(toolCall.toolCallId, {
type: "tool-call",
toolName: toolCall.name,
toolCallId: toolCall.toolCallId,
args: toolCall.args,
});
}
// ✅ DO: Build content from accumulated state
const content = [
...(text ? [{ type: "text", text }] : []),
...Array.from(toolCallsMap.values()),
];
yield { content }; // Yield the complete, correct state every time
}
}
```
### Debug Tips
1. **Log adapter calls** to trace execution:
```tsx
async *run(options) {
  console.log("Adapter called with:", options);
  // ... rest of implementation
}
```
2. **Check network requests** in browser DevTools
3. **Verify message format** matches the `ThreadMessage` structure
## API Reference
### `ChatModelAdapter`
The main interface for connecting your API to `LocalRuntime`.
| Name | Type | Required | Description |
| ---- | ---- | -------- | ----------- |
| `run` | `(options: ChatModelRunOptions) => Promise<ChatModelRunResult> \| AsyncGenerator<ChatModelRunResult>` | Yes | Function that sends messages to your API and returns the response |
### `ChatModelRunOptions`
Parameters passed to the `run` function.
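Based on the fields destructured throughout this guide, the options object looks roughly like the following sketch; the authoritative type is exported from `@assistant-ui/react`.

```typescript
// Rough sketch of ChatModelRunOptions, inferred from the examples in this
// guide; consult the exported type in @assistant-ui/react for the real shape.
interface ChatModelRunOptionsSketch {
  /** Full message history of the current thread */
  messages: readonly unknown[]; // readonly ThreadMessage[] in the real type
  /** Aborted when the user cancels the in-flight run */
  abortSignal: AbortSignal;
  /** Model context, including registered tools */
  context: { tools?: unknown };
  /** Experimental: id of the thread the run belongs to */
  unstable_threadId?: string;
}
```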
### `LocalRuntimeOptions`
Configuration options for the `LocalRuntime`.
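Again as a sketch inferred from the options used in this guide, not the exported type:

```typescript
// Rough sketch of LocalRuntimeOptions, inferred from the examples in this
// guide; consult the exported type in @assistant-ui/react for the real shape.
interface LocalRuntimeOptionsSketch {
  /** Capability adapters; providing one enables the matching UI feature */
  adapters?: {
    attachments?: unknown;
    history?: unknown;
    speech?: unknown;
    feedback?: unknown;
    suggestion?: unknown;
  };
  /** AssistantCloud instance; enables managed multi-thread support */
  cloud?: unknown;
  /** Experimental: tool names that require user approval before executing */
  unstable_humanToolNames?: string[];
}
```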
### `RemoteThreadListAdapter`
Interface for implementing custom thread list storage.
Promise",
description: "Returns list of all threads (regular and archived)",
required: true,
},
{
name: "initialize",
type: "(threadId: string) => Promise",
description: "Creates a new thread with the given ID",
required: true,
},
{
name: "rename",
type: "(remoteId: string, newTitle: string) => Promise",
description: "Updates the title of a thread",
required: true,
},
{
name: "archive",
type: "(remoteId: string) => Promise",
description: "Archives a thread",
required: true,
},
{
name: "unarchive",
type: "(remoteId: string) => Promise",
description: "Unarchives a thread",
required: true,
},
{
name: "delete",
type: "(remoteId: string) => Promise",
description: "Deletes a thread permanently",
required: true,
},
{
name: "generateTitle",
type: "(remoteId: string, messages: readonly ThreadMessage[]) => Promise",
description: "Generates a title for the thread based on the conversation",
required: true,
},
]}
/>
### Related Runtime APIs
* [AssistantRuntime API](/docs/api-reference/runtimes/assistant-runtime) - Core runtime interface and methods
* [ThreadRuntime API](/docs/api-reference/runtimes/thread-runtime) - Thread-specific operations and state management
## Related Resources
* [Runtime Layer Concepts](/docs/concepts/runtime-layer)
* [Pick a Runtime Guide](/docs/runtimes/pick-a-runtime)
* [`ExternalStoreRuntime`](/docs/runtimes/custom/external-store)
* [Examples Repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples)