# Picking a Runtime
Choosing the right runtime is crucial for your assistant-ui implementation. This guide helps you navigate the options based on your specific needs.
## Quick Decision Tree
## Core Runtimes
These are the foundational runtimes that power assistant-ui:
### `LocalRuntime`

assistant-ui manages chat state internally. A simple adapter pattern connects it to any backend.

### `ExternalStoreRuntime`

You control the state. Perfect for Redux, Zustand, or existing state management.
## Pre-Built Integrations
For popular frameworks, we provide ready-to-use integrations built on top of our core runtimes:
### Vercel AI SDK

For the `useChat` and `useAssistant` hooks: streaming with all major providers.

### Data Stream Protocol

For custom backends using the data stream protocol standard.

### LangGraph

For complex agent workflows with LangChain's graph framework.

### LangServe

For LangChain applications deployed with LangServe.

### Mastra

For workflow orchestration with Mastra's ecosystem.
## Understanding Runtime Architecture
### How Pre-Built Integrations Work
The pre-built integrations (AI SDK, LangGraph, etc.) are not separate runtime types. They're convenient wrappers built on top of our core runtimes:
- AI SDK Integration → built on `LocalRuntime` with a streaming adapter
- LangGraph Runtime → built on `LocalRuntime` with a graph execution adapter
- LangServe Runtime → built on `LocalRuntime` with a LangServe client adapter
- Mastra Runtime → built on `LocalRuntime` with a workflow adapter
This means you get all the benefits of `LocalRuntime` (automatic state management, built-in features) with zero configuration for your specific framework.
### When to Use Pre-Built vs. Core Runtimes
**Use a pre-built integration when:**
- You're already using that framework
- You want the fastest possible setup
- The integration covers your needs
**Use a core runtime when:**
- You have a custom backend
- You need features not exposed by the integration
- You want full control over the implementation
Pre-built integrations can always be replaced with a custom `LocalRuntime` or `ExternalStoreRuntime` implementation if you need more control later.
## Feature Comparison
### Core Runtime Capabilities
| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
|---|---|---|
| State Management | Automatic | You control |
| Setup Complexity | Simple | Moderate |
| Message Editing | Built-in | Implement `onEdit` |
| Branch Switching | Built-in | Implement `setMessages` |
| Regeneration | Built-in | Implement `onReload` |
| Cancellation | Built-in | Implement `onCancel` |
| Multi-thread | Via adapters | Via adapters |
### Available Adapters
| Adapter | `LocalRuntime` | `ExternalStoreRuntime` |
|---|---|---|
| ChatModel | ✅ Required | ❌ N/A |
| Attachments | ✅ | ✅ |
| Speech | ✅ | ✅ |
| Feedback | ✅ | ✅ |
| History | ✅ | ❌ Use your state |
| Suggestions | ✅ | ❌ Use your state |
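The Feedback row above, for instance, can be satisfied with a plain object. The sketch below assumes the `{ submit }` adapter shape and posts to a hypothetical `/api/feedback` endpoint; adjust both to match your actual setup:

```typescript
// Hedged sketch of a feedback adapter. The { submit } shape and the
// "/api/feedback" endpoint are assumptions, not assistant-ui requirements.
const feedbackAdapter = {
  submit: ({
    message,
    type,
  }: {
    message: { id: string };
    type: "positive" | "negative";
  }) => {
    // Fire-and-forget: forward the rating to your backend.
    void fetch("/api/feedback", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messageId: message.id, type }),
    });
  },
};
```

You would then pass this object under the runtime's adapter options alongside any attachment or speech adapters you use.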
## Common Implementation Patterns
### Vercel AI SDK with Streaming

```tsx
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { Thread } from "@/components/assistant-ui/thread"; // your assistant-ui Thread component

export function MyAssistant() {
  const runtime = useChatRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```

### Custom Backend with LocalRuntime

```tsx
import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal,
    });
    return response.json();
  },
});
```

### Redux Integration with ExternalStoreRuntime

```tsx
import { useExternalStoreRuntime } from "@assistant-ui/react";
import { useDispatch, useSelector } from "react-redux";

const messages = useSelector(selectMessages);
const dispatch = useDispatch();

const runtime = useExternalStoreRuntime({
  messages,
  onNew: async (message) => {
    dispatch(addUserMessage(message));
    const response = await api.chat(message);
    dispatch(addAssistantMessage(response));
  },
  setMessages: (messages) => dispatch(setMessages(messages)),
  onEdit: async (message) => dispatch(editMessage(message)),
  onReload: async (parentId) => dispatch(reloadMessage(parentId)),
});
```

## Examples
Explore our implementation examples:
- AI SDK Example - Vercel AI SDK with `useChatRuntime`
- External Store Example - `ExternalStoreRuntime` with custom state
- Assistant Cloud Example - Multi-thread with cloud persistence
- LangGraph Example - Agent workflows
- OpenAI Assistants Example - OpenAI Assistants API
## Common Pitfalls to Avoid
### LocalRuntime Pitfalls
- **Forgetting the adapter:** `LocalRuntime` requires a `ChatModelAdapter`; it won't work without one
- **Not handling errors:** always handle API errors in your adapter's `run` function
- **Missing abort signal:** pass `abortSignal` to your fetch calls for proper cancellation
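The last two pitfalls can be addressed together in a single defensive `run` implementation. This is a hedged sketch rather than the library's canonical adapter: `/api/chat` is a placeholder endpoint, and the `useLocalRuntime` wiring around it is omitted:

```typescript
// Hedged sketch of a ChatModelAdapter-style `run` that surfaces backend
// errors and forwards the abort signal. Shapes simplified for illustration.
const adapter = {
  async run({
    messages,
    abortSignal,
  }: {
    messages: unknown[];
    abortSignal: AbortSignal;
  }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal, // lets the UI's cancel button abort the request
    });
    if (!response.ok) {
      // Throw instead of returning a broken body; the UI can show the error.
      throw new Error(`Chat request failed: ${response.status}`);
    }
    return response.json();
  },
};
```

Throwing on a non-`ok` response is the key difference from the minimal example above: a bare `response.json()` on a 500 would otherwise feed an error payload into the thread as if it were a reply.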
### ExternalStoreRuntime Pitfalls
- **Mutating state:** always create new arrays/objects when updating messages
- **Missing handlers:** each UI feature requires its corresponding handler (e.g., no edit button without `onEdit`)
- **Forgetting loading states:** set `isRunning` to `true` while a response is pending so the UI can show progress
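To make the mutation pitfall concrete, here is a minimal sketch of immutable update helpers; `Message` is a simplified stand-in for your store's actual message type:

```typescript
// Simplified message shape for illustration only.
type Message = { id: string; role: "user" | "assistant"; content: string };

// Returns a NEW array with the edited message replaced - never mutates
// the input, so reference-based change detection fires.
function applyEdit(messages: Message[], edited: Message): Message[] {
  return messages.map((m) => (m.id === edited.id ? { ...m, ...edited } : m));
}

// Appending likewise creates a new array instead of pushing in place.
function appendMessage(messages: Message[], next: Message): Message[] {
  return [...messages, next];
}
```

Pair these with the loading-state pitfall: when `onNew` kicks off a request, set `isRunning` to `true` in your store and flip it back once the response (or error) lands.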
### General Pitfalls
- **Wrong integration level:** don't use `LocalRuntime` directly if you already have the Vercel AI SDK; use the AI SDK integration instead
- **Over-engineering:** start with pre-built integrations before building custom solutions
- **Ignoring TypeScript:** the types will guide you to the correct implementation
## Next Steps
- Choose your runtime based on the decision tree above
- Follow the specific guide for your chosen runtime or integration
- Start with an example from our examples repository
- Add features progressively using adapters
- Consider Assistant Cloud for production persistence
Need help? Join our Discord community or check the GitHub repository.