The canonical first-party framework integration for assistant-ui.
Vercel AI SDK is the most common framework people pair with assistant-ui. The full setup, attachments, persistence, tool-call patterns, and version notes are documented under runtimes/ai-sdk; this page is the entry point in the integrations tree for discoverability and architecture context.
If you arrived here looking to wire up your first chat: jump to the AI SDK v6 quickstart. This page is a high-level pointer.
Where it slots in
browser ──► useChatRuntime (react-ai-sdk) ──► /api/chat (streamText) ──► provider
                 │
                 └─ thread state, tool calls, attachments,
                    speech / dictation / feedback adapters

@assistant-ui/react-ai-sdk wraps the AI SDK's useChat hook and exposes it as an assistant-ui runtime. The runtime owns conversation state on the client; your /api/chat route returns a UI message stream from streamText. Everything else (tools, attachments, observability, gateways, custom persistence) layers on top of this base.
Pick a version
Three major versions of the ai package are supported. New projects should pick v6; v5 and v4 are documented for migration and for existing apps that haven't upgraded.
When to pick AI SDK
AI SDK is the default choice for new projects on Next.js, Remix, or any framework with a Node-compatible API route. Pick it when:
- You want a single direct path from the chat UI to your model with the smallest possible code surface.
- You will compose with a framework like Mastra, an observability tool, an LLM gateway, or tools through MCP, all of which assume an AI SDK route.
- You want first-party frontendTools, attachments, multi-step tool calls, token-usage metadata, and persisted history via withFormat.
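For a flavor of the tool-call path in that list, here is a hedged sketch of a server-side tool using the AI SDK's tool() helper. The zod schema, the weather example, and the inputSchema/stepCountIs names are v5+ assumptions (v4 used parameters and maxSteps); the full frontendTools and multi-step patterns are documented under runtimes/ai-sdk.

```typescript
import { streamText, tool, stepCountIs, convertToModelMessages, type UIMessage } from "ai";
import { openai } from "@ai-sdk/openai"; // assumption: any provider works
import { z } from "zod";

// A tool the model can call; execute runs on the server.
const getWeather = tool({
  description: "Get the current weather for a city",
  inputSchema: z.object({ city: z.string() }), // v5+ name; v4: `parameters`
  execute: async ({ city }) => ({ city, tempC: 21 }), // placeholder result
});

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"), // assumption: swap in your model
    messages: convertToModelMessages(messages),
    tools: { getWeather },
    stopWhen: stepCountIs(5), // allow multi-step tool calls (v4: `maxSteps`)
  });
  return result.toUIMessageStreamResponse();
}
```

Tool calls and their results arrive in the same UI message stream, so assistant-ui can render them as generative UI without extra wiring on the client.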
If you need streaming agent state (subgraph events, generative UI messages), look at LangGraph instead. If you have a different protocol-shaped backend (A2A, AG-UI, OpenCode), see pick a runtime.