# AI SDK v6

URL: /docs/runtimes/ai-sdk/v6

Integrate Vercel AI SDK v6 with assistant-ui for streaming chat.

## Overview \[#overview]

Integration with the Vercel AI SDK v6 uses the `useChatRuntime` hook from `@assistant-ui/react-ai-sdk`.

## Getting Started \[#getting-started]

### Create a Next.js project \[#create-a-nextjs-project]

```sh
npx create-next-app@latest my-app
cd my-app
```

### Install dependencies \[#install-dependencies]

Install the packages used in this guide: `@assistant-ui/react`, `@assistant-ui/react-ai-sdk`, `ai`, `@ai-sdk/openai`, and `zod`.

### Setup a backend route under `/api/chat` \[#setup-a-backend-route-under-apichat]

`@/app/api/chat/route.ts`

```tsx
import { openai } from "@ai-sdk/openai";
import {
  streamText,
  convertToModelMessages,
  tool,
  zodSchema,
} from "ai";
import type { UIMessage } from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages: await convertToModelMessages(messages), // Note: async in v6
    tools: {
      get_current_weather: tool({
        description: "Get the current weather",
        inputSchema: zodSchema(
          z.object({
            city: z.string(),
          }),
        ),
        execute: async ({ city }) => {
          return `The weather in ${city} is sunny`;
        },
      }),
    },
  });

  return result.toUIMessageStreamResponse();
}
```

### Setup the frontend \[#setup-the-frontend]

`@/app/page.tsx`

```tsx
"use client";

import { Thread } from "@/components/assistant-ui/thread";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";

export default function Home() {
  const runtime = useChatRuntime();

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
## Tracking Token Usage \[#tracking-token-usage]

assistant-ui exports a `useThreadTokenUsage` hook to access thread-level token usage on the client. Use `messageMetadata` in your Next.js route to attach `usage` from the `finish` part and `modelId` from the `finish-step` part.

```tsx
import { streamText, convertToModelMessages } from "ai";
import { frontendTools } from "@assistant-ui/react-ai-sdk";

export async function POST(req: Request) {
  const { messages, tools, config } = await req.json();

  const result = streamText({
    model: getModel(config?.modelName), // app-specific model resolver
    messages: await convertToModelMessages(messages),
    tools: frontendTools(tools),
  });

  return result.toUIMessageStreamResponse({
    messageMetadata: ({ part }) => {
      if (part.type === "finish") {
        return {
          usage: part.totalUsage,
        };
      }
      if (part.type === "finish-step") {
        return {
          modelId: part.response.modelId,
        };
      }
      return undefined;
    },
  });
}
```

Use `useThreadTokenUsage` to render token usage on the client.

```tsx
"use client";

import { useThreadTokenUsage } from "@assistant-ui/react-ai-sdk";

export function TokenCounter() {
  const usage = useThreadTokenUsage();
  if (!usage) return null;

  return <div>{usage.totalTokens} total tokens</div>;
}
```
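The `usage` value returned by the hook follows the AI SDK's token-usage shape. Assuming the v6 field names `inputTokens`, `outputTokens`, and `totalTokens` (check your installed version), a small formatting helper keeps the component lean:

```typescript
// Sketch: format a usage object for display. The field names below are
// assumed to match the AI SDK v6 usage type; adjust if your version differs.
interface TokenUsage {
  inputTokens?: number;
  outputTokens?: number;
  totalTokens?: number;
}

export function formatUsage(usage: TokenUsage): string {
  const fmt = (n?: number) => (n ?? 0).toLocaleString("en-US");
  return `${fmt(usage.inputTokens)} in / ${fmt(usage.outputTokens)} out / ${fmt(usage.totalTokens)} total`;
}
```

In `TokenCounter`, `formatUsage(usage)` would replace the raw `totalTokens` interpolation.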
## Persisting Chat History \[#persisting-chat-history]

By default, messages live only in memory and reset on reload. To persist and restore history per thread, provide a `ThreadHistoryAdapter` via `adapters.history`.

The adapter **must** implement `withFormat`: `useChatRuntime` persists history through `withFormat(fmt)` so messages round-trip as AI SDK `UIMessage` objects. An adapter without `withFormat` throws at runtime; the top-level `load` / `append` are unused in the AI SDK path.

For server-side cloud persistence with zero adapter code, see the [AssistantCloud integration](/docs/cloud/ai-sdk-assistant-ui).

### Example \[#example]

```tsx
"use client";

import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import type { ThreadHistoryAdapter } from "@assistant-ui/react";

const historyAdapter: ThreadHistoryAdapter = {
  // Required by the type; unused by useChatRuntime.
  async load() {
    return { headId: null, messages: [] };
  },
  async append() {},

  // `fmt` encodes UIMessage ↔ storage rows (ai-sdk/v6 format).
  withFormat: (fmt) => ({
    async load() {
      const rows = await fetch("/api/history").then((r) => r.json());
      return { messages: rows.map(fmt.decode) };
    },
    async append(item) {
      await fetch("/api/history", {
        method: "POST",
        body: JSON.stringify({
          id: fmt.getId(item.message),
          parent_id: item.parentId,
          format: fmt.format,
          content: fmt.encode(item),
        }),
      });
    },
  }),
};

function Chat() {
  const runtime = useChatRuntime({
    adapters: { history: historyAdapter },
  });
  // ...
}
```

Each persisted row follows `{ id, parent_id, format, content }`; `fmt.encode` produces the `content` payload and `fmt.decode` reverses it, so your backend never needs to know about `UIMessage` internals.
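The adapter's `fetch` calls assume an `/api/history` endpoint. As a minimal sketch of the storage side (the row shape comes from above; the in-memory store and the `appendRow` / `loadRows` names are illustrative, not part of assistant-ui):

```typescript
// Sketch of the storage the adapter's fetches assume: an in-memory row store.
// The row shape { id, parent_id, format, content } matches what the
// adapter's append() sends; everything else here is illustrative.
interface HistoryRow {
  id: string;
  parent_id: string | null;
  format: string; // e.g. "ai-sdk/v6"
  content: unknown; // opaque payload produced by fmt.encode
}

const rows: HistoryRow[] = [];

// Handles the POST /api/history body.
export function appendRow(row: HistoryRow): void {
  rows.push(row);
}

// Produces the GET /api/history response: rows in insertion order,
// ready to be mapped through fmt.decode on the client.
export function loadRows(): HistoryRow[] {
  return [...rows];
}
```

A real backend would key rows by thread ID and persist them to a database, but the request/response contract stays the same.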
## Key Changes from v5 \[#key-changes-from-v5]

| Feature                    | v5                            | v6                                        |
| -------------------------- | ----------------------------- | ----------------------------------------- |
| **ai package**             | `ai@^5`                       | `ai@^6`                                   |
| **@ai-sdk/react**          | `@ai-sdk/react@^2`            | `@ai-sdk/react@^3`                        |
| **convertToModelMessages** | Sync                          | Async (`await`)                           |
| **Tool schema**            | `parameters: z.object({...})` | `inputSchema: zodSchema(z.object({...}))` |

## API Reference \[#api-reference]

### useChatRuntime \[#usechatruntime]

Creates a runtime integrated with AI SDK's `useChat` hook.

```tsx
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";

const runtime = useChatRuntime();
```

### Custom API URL \[#custom-api-url]

To use a different endpoint, pass a custom `AssistantChatTransport`:

```tsx
import {
  useChatRuntime,
  AssistantChatTransport,
} from "@assistant-ui/react-ai-sdk";

const runtime = useChatRuntime({
  transport: new AssistantChatTransport({
    api: "/my-custom-api/chat",
  }),
});
```

### System Messages and Frontend Tools \[#system-messages-and-frontend-tools]

`AssistantChatTransport` (used by default) automatically forwards system messages and frontend tools to your backend. To consume them, update your backend route:

```tsx
import { openai } from "@ai-sdk/openai";
import { streamText, convertToModelMessages, zodSchema } from "ai";
import type { UIMessage } from "ai";
import { frontendTools } from "@assistant-ui/react-ai-sdk";

export async function POST(req: Request) {
  const {
    messages,
    system,
    tools,
  }: {
    messages: UIMessage[];
    system?: string;
    tools?: any;
  } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    system,
    messages: await convertToModelMessages(messages),
    tools: {
      ...frontendTools(tools),
      // your backend tools...
    },
  });

  return result.toUIMessageStreamResponse();
}
```

### useAISDKRuntime (Advanced) \[#useaisdkruntime-advanced]

For advanced use cases where you need direct access to the `useChat` hook:

```tsx
import { useChat } from "@ai-sdk/react";
import { useAISDKRuntime } from "@assistant-ui/react-ai-sdk";

const chat = useChat();
const runtime = useAISDKRuntime(chat);
```

### Example \[#example-1]

For a complete example, check out the [AI SDK v6 example](https://github.com/assistant-ui/assistant-ui/tree/main/examples/with-ai-sdk-v6) in our repository.