
Picking a Runtime

Choosing the right runtime is crucial for your assistant-ui implementation. This guide helps you navigate the options based on your specific needs.

Quick Decision Tree

In short: already using a supported framework (AI SDK, LangGraph, LangServe, Mastra)? Use its pre-built integration. Have a custom backend and want chat state managed for you? Use LocalRuntime. Need to keep chat state in your own store (e.g. Redux)? Use ExternalStoreRuntime.

Core Runtimes

These are the foundational runtimes that power assistant-ui:

  • LocalRuntime: assistant-ui manages chat state for you; you supply a ChatModelAdapter that calls your backend
  • ExternalStoreRuntime: you own the message state and implement handlers such as onNew, onEdit, and onReload

Pre-Built Integrations

For popular frameworks, we provide ready-to-use integrations built on top of our core runtimes:

  • AI SDK Integration (Vercel AI SDK)
  • LangGraph Runtime
  • LangServe Runtime
  • Mastra Runtime

Understanding Runtime Architecture

How Pre-Built Integrations Work

The pre-built integrations (AI SDK, LangGraph, etc.) are not separate runtime types. They're convenient wrappers built on top of our core runtimes:

  • AI SDK Integration → Built on LocalRuntime with streaming adapter
  • LangGraph Runtime → Built on LocalRuntime with graph execution adapter
  • LangServe Runtime → Built on LocalRuntime with LangServe client adapter
  • Mastra Runtime → Built on LocalRuntime with workflow adapter

This means you get all the benefits of LocalRuntime (automatic state management, built-in features) with zero configuration for your specific framework.
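
To illustrate the layering, here is a hypothetical wrapper in the same spirit. This is a sketch only, not the actual implementation of any integration; it assumes a backend that streams plain text from the given endpoint:

import { useLocalRuntime } from "@assistant-ui/react";

// Hypothetical sketch: an "integration" is just LocalRuntime plus an adapter.
// Assumes the backend streams the assistant reply as plain text.
function useMyFrameworkRuntime({ api }: { api: string }) {
  return useLocalRuntime({
    async *run({ messages, abortSignal }) {
      const response = await fetch(api, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ messages }),
        signal: abortSignal,
      });
      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      let text = "";
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        text += decoder.decode(value, { stream: true });
        // Each yield updates the in-progress assistant message.
        yield { content: [{ type: "text" as const, text }] };
      }
    },
  });
}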

When to Use Pre-Built vs Core Runtimes

Use a pre-built integration when:

  • You're already using that framework
  • You want the fastest possible setup
  • The integration covers your needs

Use a core runtime when:

  • You have a custom backend
  • You need features not exposed by the integration
  • You want full control over the implementation

Pre-built integrations can always be replaced with a custom LocalRuntime or ExternalStoreRuntime implementation if you need more control later.

Feature Comparison

Core Runtime Capabilities

Feature           LocalRuntime  ExternalStoreRuntime
State Management  Automatic     You control
Setup Complexity  Simple        Moderate
Message Editing   Built-in      Implement onEdit
Branch Switching  Built-in      Implement setMessages
Regeneration      Built-in      Implement onReload
Cancellation      Built-in      Implement onCancel
Multi-thread      Via adapters  Via adapters

Available Adapters

Adapter      LocalRuntime  ExternalStoreRuntime
ChatModel    ✅ Required   ❌ N/A
Attachments  ✅            ✅
Speech       ✅            ✅
Feedback     ✅            ✅
History      ✅            ❌ Use your state
Suggestions  ✅            ❌ Use your state
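
For example, optional adapters are attached to a LocalRuntime via its options. A minimal sketch, where myChatModelAdapter stands in for your own ChatModelAdapter:

import {
  useLocalRuntime,
  CompositeAttachmentAdapter,
  SimpleImageAttachmentAdapter,
  SimpleTextAttachmentAdapter,
  WebSpeechSynthesisAdapter,
} from "@assistant-ui/react";

// myChatModelAdapter is assumed to be your own ChatModelAdapter.
const runtime = useLocalRuntime(myChatModelAdapter, {
  adapters: {
    // Let users attach images and text files to their messages.
    attachments: new CompositeAttachmentAdapter([
      new SimpleImageAttachmentAdapter(),
      new SimpleTextAttachmentAdapter(),
    ]),
    // Read assistant messages aloud via the browser's speech synthesis.
    speech: new WebSpeechSynthesisAdapter(),
  },
});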

Common Implementation Patterns

Vercel AI SDK with Streaming

import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
// Thread is the chat UI component in your project (the default location when
// generated by the assistant-ui CLI).
import { Thread } from "@/components/assistant-ui/thread";

export function MyAssistant() {
  const runtime = useChatRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
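
useChatRuntime expects the endpoint to speak the AI SDK's data stream protocol. A minimal sketch of a matching /api/chat route handler, assuming the Vercel AI SDK (v4) with an OpenAI model; adjust the model and imports to your setup:

import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

// POST /api/chat: receives the message history, streams the model response.
export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"),
    messages,
  });
  return result.toDataStreamResponse();
}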

Custom Backend with LocalRuntime

import { useLocalRuntime } from "@assistant-ui/react";
import type { ChatModelAdapter } from "@assistant-ui/react";

const MyModelAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      // Forward the abort signal so Stop cancels the request.
      signal: abortSignal,
    });
    const data = await response.json();
    // The adapter must return a ChatModelRunResult; this assumes the
    // endpoint responds with { text: string }.
    return { content: [{ type: "text", text: data.text }] };
  },
};

// Inside a React component:
const runtime = useLocalRuntime(MyModelAdapter);

Redux Integration with ExternalStoreRuntime

import { useExternalStoreRuntime } from "@assistant-ui/react";
import { useDispatch, useSelector } from "react-redux";

// Inside a React component; selectMessages and the action creators are
// your own Redux code (see the slice sketch below).
const messages = useSelector(selectMessages);
const dispatch = useDispatch();

const runtime = useExternalStoreRuntime({
  messages,
  onNew: async (message) => {
    dispatch(addUserMessage(message));
    const response = await api.chat(message);
    dispatch(addAssistantMessage(response));
  },
  setMessages: (messages) => dispatch(setMessages(messages)),
  onEdit: async (message) => dispatch(editMessage(message)),
  onReload: async (parentId) => dispatch(reloadMessage(parentId)),
});
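
For completeness, a minimal sketch of a slice that could back a subset of the handlers above, assuming Redux Toolkit; the message shape and action names are illustrative:

import { createSlice, type PayloadAction } from "@reduxjs/toolkit";

// Illustrative message shape; match it to your app's own type.
type ChatMessage = { id: string; role: "user" | "assistant"; content: string };

const chatSlice = createSlice({
  name: "chat",
  initialState: { messages: [] as ChatMessage[] },
  reducers: {
    // Redux Toolkit uses Immer, so these "mutations" produce new state.
    addUserMessage(state, action: PayloadAction<ChatMessage>) {
      state.messages.push(action.payload);
    },
    addAssistantMessage(state, action: PayloadAction<ChatMessage>) {
      state.messages.push(action.payload);
    },
    setMessages(state, action: PayloadAction<ChatMessage[]>) {
      state.messages = action.payload;
    },
  },
});

export const { addUserMessage, addAssistantMessage, setMessages } =
  chatSlice.actions;
export const selectMessages = (state: { chat: { messages: ChatMessage[] } }) =>
  state.chat.messages;
export default chatSlice.reducer;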

Examples

Explore the implementation examples in our examples repository to see each runtime in a working app.

Common Pitfalls to Avoid

LocalRuntime Pitfalls

  • Forgetting the adapter: LocalRuntime requires a ChatModelAdapter; it won't work without one
  • Not handling errors: Always handle API errors in your adapter's run function
  • Missing abort signal: Pass abortSignal to your fetch calls for proper cancellation (see the sketch below)
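
A sketch of an adapter that avoids all three pitfalls; the /api/chat endpoint and its { text: string } response shape are assumptions:

import { useLocalRuntime, type ChatModelAdapter } from "@assistant-ui/react";

const SafeAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal }) {
    const response = await fetch("/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ messages }),
      signal: abortSignal, // forward the signal so Stop cancels the request
    });
    // Surface API failures instead of silently parsing an error body.
    if (!response.ok) {
      throw new Error(`Chat request failed with status ${response.status}`);
    }
    const data = await response.json();
    return { content: [{ type: "text", text: data.text }] };
  },
};

// The adapter is required: useLocalRuntime(SafeAdapter), never useLocalRuntime().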

ExternalStoreRuntime Pitfalls

  • Mutating state: Always create new arrays/objects when updating messages
  • Missing handlers: Each UI feature requires its corresponding handler (e.g., no edit button without onEdit)
  • Forgetting loading state: Set isRunning to true while a request is in flight so the UI can show progress (see the sketch below)
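
A minimal sketch with plain React state that demonstrates non-mutating updates and the isRunning flag. The app message shape and api.chat helper are hypothetical:

import { useState } from "react";
import {
  useExternalStoreRuntime,
  type AppendMessage,
} from "@assistant-ui/react";

// Hypothetical app-side message shape and chat API.
type MyMessage = { role: "user" | "assistant"; content: string };
declare const api: { chat: (text: string) => Promise<string> };

function useMyRuntime() {
  const [messages, setMessages] = useState<MyMessage[]>([]);
  const [isRunning, setIsRunning] = useState(false);

  return useExternalStoreRuntime({
    messages,
    isRunning,
    setMessages,
    // Map app messages into assistant-ui's message format.
    convertMessage: (m: MyMessage) => ({
      role: m.role,
      content: [{ type: "text" as const, text: m.content }],
    }),
    onNew: async (message: AppendMessage) => {
      const text =
        message.content[0]?.type === "text" ? message.content[0].text : "";
      // Always create a new array; never push into the existing one.
      setMessages((prev) => [...prev, { role: "user", content: text }]);
      setIsRunning(true);
      try {
        const reply = await api.chat(text);
        setMessages((prev) => [
          ...prev,
          { role: "assistant", content: reply },
        ]);
      } finally {
        setIsRunning(false);
      }
    },
  });
}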

General Pitfalls

  • Wrong integration level: Don't use LocalRuntime directly if you're already using the Vercel AI SDK; use the AI SDK integration instead
  • Over-engineering: Start with pre-built integrations before building custom solutions
  • Ignoring TypeScript: The types will guide you to the correct implementation

Next Steps

  1. Choose your runtime based on the decision tree above
  2. Follow the specific guide for your chosen runtime (LocalRuntime or ExternalStoreRuntime)
  3. Start with an example from our examples repository
  4. Add features progressively using adapters
  5. Consider Assistant Cloud for production persistence

Need help? Join our Discord community or check out the GitHub repository.