
AI SDK + Chat Persistence

Chat persistence with AI SDK

Overview

This example demonstrates integrating assistant-ui with the Vercel AI SDK for building production-ready chat interfaces. It showcases a complete chat application with thread management, message persistence, and a collapsible sidebar for conversation history.

Features

  • AI SDK Integration: Seamless connection with Vercel's ai package
  • Thread Management: Create, switch, and delete conversation threads
  • Collapsible Sidebar: Toggle sidebar visibility for focused chat
  • Model Picker: Switch between different AI models
  • Responsive Design: Mobile-friendly with sheet-based navigation
  • Real-time Streaming: Live message streaming with loading states

Quick Start

npm install @assistant-ui/react @assistant-ui/react-ai-sdk ai @ai-sdk/openai
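The `@ai-sdk/openai` provider reads its API key from the environment. In a Next.js project this typically lives in `.env.local` (the `OPENAI_API_KEY` variable name is the provider's default):

```shell
# .env.local — loaded automatically by Next.js; do not commit this file
OPENAI_API_KEY=sk-...
```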

Code

Client Component

"use client";

import { useChat } from "@ai-sdk/react";
import { useVercelAIRuntime } from "@assistant-ui/react-ai-sdk";
import { AssistantRuntimeProvider, Thread } from "@assistant-ui/react";

export default function Chat() {
  const chat = useChat({
    api: "/api/chat",
  });

  const runtime = useVercelAIRuntime(chat);

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <div className="flex h-full">
        {/* Sidebar is a local component (thread list + model picker); see the full source */}
        <Sidebar />
        <main className="flex-1">
          <Thread />
        </main>
      </div>
    </AssistantRuntimeProvider>
  );
}

API Route

// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    // Optional: Add system prompt
    system: "You are a helpful assistant.",
  });

  return result.toDataStreamResponse();
}

Key Integration Points

| Hook/Function | Purpose |
| --- | --- |
| `useChat` | AI SDK hook for chat state management |
| `useVercelAIRuntime` | Adapter connecting the AI SDK to assistant-ui |
| `streamText` | Server-side streaming response generation |
| `toDataStreamResponse` | Converts the stream to a `Response` object |

Adding Persistence

To persist conversations, add a database and modify the API route:

// Pseudocode — `db` stands in for your database client (Prisma, Drizzle, etc.)

// Save each message as it arrives
await db.messages.create({
  threadId,
  role: message.role,
  content: message.content,
});

// Load a thread's messages on page load
const savedMessages = await db.messages.findMany({ threadId });
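The snippet above assumes a real database client. The shape of the round trip can be sketched with an in-memory `Map` standing in for the database; the `db` object and `Message` type here are illustrative, not part of the AI SDK:

```typescript
// Minimal sketch of the persistence round trip.
// The "store" is an in-memory Map keyed by thread ID — swap in a real
// database client (Prisma, Drizzle, etc.) in an actual application.
type Message = { role: "user" | "assistant"; content: string };

const store = new Map<string, Message[]>();

const db = {
  saveMessage(threadId: string, message: Message): void {
    const thread = store.get(threadId) ?? [];
    thread.push(message);
    store.set(threadId, thread);
  },
  loadMessages(threadId: string): Message[] {
    return store.get(threadId) ?? [];
  },
};

// Save both sides of an exchange, then reload them for the UI
db.saveMessage("thread-1", { role: "user", content: "Hello!" });
db.saveMessage("thread-1", { role: "assistant", content: "Hi, how can I help?" });

const history = db.loadMessages("thread-1");
console.log(history.length); // 2
```

On the client, the loaded history can be handed to `useChat` via its `initialMessages` option so the thread resumes where it left off.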

Source

View full source on GitHub