# Picking a Runtime
URL: /docs/runtimes/pick-a-runtime
Which runtime fits your backend? Decision guide for common setups.
Choosing the right runtime is crucial for your assistant-ui implementation. This guide helps you navigate the options based on your specific needs.
## Quick Decision Tree

In short:

* Already using a supported framework (AI SDK, LangGraph, LangServe, Mastra)? Use its pre-built integration.
* Custom backend, and you want assistant-ui to manage chat state for you? Use `LocalRuntime`.
* Message state already lives in your own store (Redux, Zustand, etc.)? Use `ExternalStoreRuntime`.
## Core Runtimes

These are the foundational runtimes that power assistant-ui:

* **`LocalRuntime`**: assistant-ui manages conversation state for you; you provide a `ChatModelAdapter` that talks to your backend.
* **`ExternalStoreRuntime`**: you own the message state (Redux, Zustand, a server cache, etc.) and wire UI actions to your own handlers.
## Pre-Built Integrations

For popular frameworks, we provide ready-to-use integrations built on top of our core runtimes:

* **AI SDK** (Vercel)
* **LangGraph**
* **LangServe**
* **Mastra**
## Understanding Runtime Architecture

### How Pre-Built Integrations Work
The pre-built integrations (AI SDK, LangGraph, etc.) are **not separate runtime types**. They're convenient wrappers built on top of our core runtimes:
* **AI SDK Integration** → Built on `LocalRuntime` with streaming adapter
* **LangGraph Runtime** → Built on `LocalRuntime` with graph execution adapter
* **LangServe Runtime** → Built on `LocalRuntime` with LangServe client adapter
* **Mastra Runtime** → Built on `LocalRuntime` with workflow adapter
This means you get all the benefits of `LocalRuntime` (automatic state management, built-in features) with zero configuration for your specific framework.
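To make the wrapper relationship concrete, here is a minimal sketch of what such an integration amounts to: a plain adapter object whose `run` method translates the thread's messages into a backend call. The name `myStreamAdapter` and the echo behavior are illustrative, not real assistant-ui exports; a real integration would call the framework's endpoint instead.

```typescript
type Message = { role: string; content: string };

// Sketch of the adapter a pre-built integration supplies to LocalRuntime.
// LocalRuntime calls `run` with the current messages; the wrapper's only job
// is to translate them into a request against the framework's backend.
const myStreamAdapter = {
  async run({ messages }: { messages: Message[] }) {
    const transcript = messages.map((m) => `${m.role}: ${m.content}`).join("\n");
    // A real adapter would fetch from the framework here; we echo for illustration.
    return { content: [{ type: "text" as const, text: `echo:\n${transcript}` }] };
  },
};
```

Conceptually, a pre-built integration is then little more than `useLocalRuntime(myStreamAdapter)` with the adapter pre-configured for that framework's wire format.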
### When to Use Pre-Built vs Core Runtimes
**Use a pre-built integration when:**
* You're already using that framework
* You want the fastest possible setup
* The integration covers your needs
**Use a core runtime when:**
* You have a custom backend
* You need features not exposed by the integration
* You want full control over the implementation
Pre-built integrations can always be replaced with a custom `LocalRuntime` or `ExternalStoreRuntime` implementation if you need more control later.
## Feature Comparison

### Core Runtime Capabilities
| Feature | `LocalRuntime` | `ExternalStoreRuntime` |
| -------------------- | -------------- | ----------------------- |
| **State Management** | Automatic | You control |
| **Setup Complexity** | Simple | Moderate |
| **Message Editing** | Built-in | Implement `onEdit` |
| **Branch Switching** | Built-in | Implement `setMessages` |
| **Regeneration** | Built-in | Implement `onReload` |
| **Cancellation** | Built-in | Implement `onCancel` |
| **Multi-thread** | Via adapters | Via adapters |
### Available Adapters
| Adapter | `LocalRuntime` | `ExternalStoreRuntime` |
| ----------- | -------------- | ---------------------- |
| ChatModel | ✅ Required | ❌ N/A |
| Attachments | ✅ | ✅ |
| Speech | ✅ | ✅ |
| Feedback | ✅ | ✅ |
| History | ✅ | ❌ Use your state |
| Suggestions | ✅ | ❌ Use your state |
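As a sketch of how these adapters attach, `useLocalRuntime` accepts an options object with an `adapters` field as its second argument. The adapter values below are empty placeholders standing in for real implementations (a real attachment adapter implements methods such as `add` and `send`); only the shape of the options object is the point here.

```typescript
// Placeholder adapter objects -- real ones implement the corresponding interfaces.
const myAttachmentAdapter = {};
const mySpeechAdapter = {};
const myFeedbackAdapter = {};

// Shape of the second argument to useLocalRuntime, e.g.:
//   const runtime = useLocalRuntime(myChatModelAdapter, localRuntimeOptions);
const localRuntimeOptions = {
  adapters: {
    attachments: myAttachmentAdapter,
    speech: mySpeechAdapter,
    feedback: myFeedbackAdapter,
    // history and suggestions adapters also go here for LocalRuntime
  },
};
```

`ExternalStoreRuntime` takes its adapters the same way, minus the ones the table marks N/A.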
## Common Implementation Patterns

### Vercel AI SDK with Streaming
```tsx
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
// Your generated Thread UI component
import { Thread } from "@/components/assistant-ui/thread";

export function MyAssistant() {
  const runtime = useChatRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
### Custom Backend with LocalRuntime
```tsx
import { useLocalRuntime, type ChatModelAdapter } from "@assistant-ui/react";

const myAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      // Forward the abort signal so Stop/cancel works end-to-end.
      signal: abortSignal,
    });
    if (!response.ok) {
      throw new Error(`Chat request failed: ${response.status}`);
    }
    // Expected shape: { content: [{ type: "text", text: "..." }] }
    return response.json();
  },
};

const runtime = useLocalRuntime(myAdapter);
```
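A `LocalRuntime` adapter's `run` can also be an async generator for streaming: each yielded value replaces the in-progress assistant message, so every yield should carry the full text so far. A framework-free sketch of that accumulation pattern (the chunk source is stubbed; a real adapter would read chunks from the fetch response body):

```typescript
// Streaming sketch: each yield carries the accumulated text, which is the
// shape a streaming LocalRuntime `run` implementation produces.
async function* streamRun(chunks: Iterable<string> | AsyncIterable<string>) {
  let text = "";
  for await (const chunk of chunks) {
    text += chunk;
    yield { content: [{ type: "text" as const, text }] };
  }
}
```

In a real adapter you would decode `response.body` chunk by chunk (and still check `abortSignal`) instead of iterating a stubbed array.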
### Redux Integration with ExternalStoreRuntime
```tsx
import { useExternalStoreRuntime } from "@assistant-ui/react";
import { useDispatch, useSelector } from "react-redux";

const messages = useSelector(selectMessages);
const dispatch = useDispatch();

const runtime = useExternalStoreRuntime({
  messages,
  // Called when the user sends a new message.
  onNew: async (message) => {
    dispatch(addUserMessage(message));
    const response = await api.chat(message);
    dispatch(addAssistantMessage(response));
  },
  // Required for branch switching.
  setMessages: (messages) => dispatch(setMessages(messages)),
  // Enables the edit button on user messages.
  onEdit: async (message) => dispatch(editMessage(message)),
  // Enables the regenerate button on assistant messages.
  onReload: async (parentId) => dispatch(reloadMessage(parentId)),
});
```
## Examples
Explore our implementation examples:
* **[AI SDK v6 Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-ai-sdk-v6)** - Vercel AI SDK with `useChatRuntime`
* **[External Store Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-external-store)** - `ExternalStoreRuntime` with custom state
* **[Assistant Cloud Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-cloud)** - Multi-thread with cloud persistence
* **[LangGraph Example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-langgraph)** - Agent workflows
## Common Pitfalls to Avoid

### LocalRuntime Pitfalls
* **Forgetting the adapter**: `LocalRuntime` requires a `ChatModelAdapter`; it won't work without one
* **Not handling errors**: Always handle API errors in your adapter's `run` function
* **Missing abort signal**: Pass `abortSignal` to your fetch calls for proper cancellation
### ExternalStoreRuntime Pitfalls
* **Mutating state**: Always create new arrays/objects when updating messages
* **Missing handlers**: Each UI feature requires its corresponding handler (e.g., no edit button without `onEdit`)
* **Forgetting `isRunning`**: set `isRunning: true` while a response is pending so the UI can show loading and cancel states
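The mutation pitfall is easiest to avoid with a small helper that always returns a fresh array; `appendMessage` and the `ThreadMessage` shape below are illustrative, not assistant-ui exports:

```typescript
type ThreadMessage = { id: string; role: "user" | "assistant"; content: string };

// Returns a NEW array rather than pushing into the existing one, so
// ExternalStoreRuntime (and React) can detect the update by reference.
function appendMessage(
  messages: readonly ThreadMessage[],
  msg: ThreadMessage,
): ThreadMessage[] {
  return [...messages, msg];
}
```

The same spread-and-replace pattern applies when editing a message in place: map to a new array instead of assigning into the old one.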
### General Pitfalls
* **Wrong integration level**: Don't use `LocalRuntime` if you already have Vercel AI SDK - use the AI SDK integration instead
* **Over-engineering**: Start with pre-built integrations before building custom solutions
* **Ignoring TypeScript**: The types will guide you to the correct implementation
## Next Steps
1. **Choose your runtime** based on the decision tree above
2. **Follow the specific guide**:
* [AI SDK Integration](/docs/runtimes/ai-sdk/use-chat)
* [`LocalRuntime` Guide](/docs/runtimes/custom/local)
* [`ExternalStoreRuntime` Guide](/docs/runtimes/custom/external-store)
* [LangGraph Integration](/docs/runtimes/langgraph)
3. **Start with an example** from our [examples repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples)
4. **Add features progressively** using adapters
5. **Consider Assistant Cloud** for production persistence
Need help? Join our [Discord community](https://discord.gg/S9dwgCNEFs) or check the [GitHub](https://github.com/assistant-ui/assistant-ui).