Data Stream Protocol

Integration with data stream protocol endpoints for streaming AI responses.

The @assistant-ui/react-data-stream package provides integration with data stream protocol endpoints, enabling streaming AI responses with tool support and state management.

Overview

The data stream protocol is a standardized format for streaming AI responses that supports:

  • Streaming text responses with real-time updates
  • Tool calling with structured parameters and results
  • State management for conversation context
  • Error handling and cancellation support
  • Attachment support for multimodal interactions
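To make the streaming model concrete, here is a minimal sketch of a client-side consumer. The newline-delimited JSON framing and the `text-delta` / `tool-call` / `finish` chunk names below are hypothetical placeholders for illustration only; the real wire format is an implementation detail of the assistant-stream package.

```typescript
// Illustrative only: assumes a hypothetical newline-delimited JSON framing.
// The actual wire format is defined by the assistant-stream package.
type Chunk =
  | { type: "text-delta"; textDelta: string }
  | { type: "tool-call"; toolName: string; args: unknown }
  | { type: "finish" };

function collectText(raw: string): string {
  let text = "";
  for (const line of raw.split("\n")) {
    if (!line.trim()) continue;
    const chunk = JSON.parse(line) as Chunk;
    // Accumulate text deltas as they arrive; other chunk types carry
    // tool calls and stream lifecycle events.
    if (chunk.type === "text-delta") text += chunk.textDelta;
  }
  return text;
}

const sample =
  '{"type":"text-delta","textDelta":"Hello, "}\n' +
  '{"type":"text-delta","textDelta":"world!"}\n' +
  '{"type":"finish"}';
console.log(collectText(sample)); // prints: Hello, world!
```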

Installation

npm install @assistant-ui/react-data-stream

Basic Usage

Set up the Runtime

Use useDataStreamRuntime to connect to your data stream endpoint:

app/page.tsx
"use client";
import { useDataStreamRuntime } from "@assistant-ui/react-data-stream";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { Thread } from "@/components/assistant-ui/thread";

export default function ChatPage() {
  const runtime = useDataStreamRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}

Create Backend Endpoint

Your backend endpoint should accept POST requests and return data stream responses:

app/api/chat/route.ts
import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(request: Request) {
  const { messages, tools, system, threadId } = await request.json();

  return createAssistantStreamResponse(async (controller) => {
    // Process the request with your AI provider
    const stream = await processWithAI({
      messages,
      tools,
      system,
    });

    // Stream the response
    for await (const chunk of stream) {
      controller.appendText(chunk.text);
    }
  });
}

The request body includes:

  • messages - The conversation history
  • tools - Available tool definitions
  • system - System prompt (if configured)
  • threadId - The current thread/conversation identifier
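The body described above can be sketched as a TypeScript type. The field names come from the list; the message and tool shapes are simplified placeholders, not the package's exact types.

```typescript
// Sketch of the POST body described above; message and tool shapes are
// simplified for illustration.
interface ChatRequestBody {
  messages: Array<{
    role: "system" | "user" | "assistant" | "tool";
    content: unknown;
  }>;
  tools?: Record<string, { description?: string; parameters: object }>;
  system?: string;
  threadId: string;
}

const body: ChatRequestBody = {
  messages: [{ role: "user", content: "Hello" }],
  system: "You are helpful.",
  threadId: "thread_123",
};
console.log(Object.keys(body)); // [ 'messages', 'system', 'threadId' ]
```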

Advanced Configuration

Custom Headers and Authentication

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  headers: {
    "Authorization": "Bearer " + token,
    "X-Custom-Header": "value",
  },
  credentials: "include",
});

Dynamic Headers

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  headers: async () => {
    const token = await getAuthToken();
    return {
      "Authorization": "Bearer " + token,
    };
  },
});

Dynamic Body

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  headers: async () => ({
    "Authorization": `Bearer ${await getAuthToken()}`,
  }),
  body: async () => ({
    requestId: crypto.randomUUID(),
    timestamp: Date.now(),
    signature: await computeSignature(),
  }),
});

Event Callbacks

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  onResponse: (response) => {
    console.log("Response received:", response.status);
  },
  onFinish: (message) => {
    console.log("Message completed:", message);
  },
  onError: (error) => {
    console.error("Error occurred:", error);
  },
  onCancel: () => {
    console.log("Request cancelled");
  },
});

Tool Integration

Human-in-the-loop tools (using human() for tool interrupts) are not supported in the data stream runtime. If you need human approval workflows or interactive tool UIs, consider using LocalRuntime or Assistant Cloud instead.

Frontend Tools

Use the frontendTools helper to serialize client-side tools:

import { z } from "zod";
import { frontendTools } from "@assistant-ui/react-data-stream";
import { makeAssistantTool } from "@assistant-ui/react";

const weatherTool = makeAssistantTool({
  toolName: "get_weather",
  description: "Get current weather",
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    const weather = await fetchWeather(location);
    return `Weather in ${location}: ${weather}`;
  },
});

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  body: {
    tools: frontendTools({
      get_weather: weatherTool,
    }),
  },
});

Backend Tool Processing

Your backend should handle tool calls and return results:

Backend tool handling
// Tools are automatically forwarded to your endpoint
const { messages, tools } = await request.json();

// Process tools with your AI provider
const response = await ai.generateText({
  messages,
  tools,
  // Tool results are streamed back automatically
});

Assistant Cloud Integration

For Assistant Cloud deployments, use useCloudRuntime:

import { useCloudRuntime } from "@assistant-ui/react-data-stream";

const runtime = useCloudRuntime({
  cloud: assistantCloud,
  assistantId: "my-assistant-id",
});

The useCloudRuntime hook is currently under active development and not yet ready for production use.

Message Conversion

For custom integrations, use the framework-agnostic utilities from assistant-stream:

import { toGenericMessages, toToolsJSONSchema } from "assistant-stream";

// Convert messages to a generic format
const genericMessages = toGenericMessages(messages);

// Convert tools to JSON Schema format
const toolSchemas = toToolsJSONSchema(tools);

The GenericMessage format can be easily converted to any LLM provider format:

import type { GenericMessage } from "assistant-stream";

// GenericMessage is a union of:
// - { role: "system"; content: string }
// - { role: "user"; content: (GenericTextPart | GenericFilePart)[] }
// - { role: "assistant"; content: (GenericTextPart | GenericToolCallPart)[] }
// - { role: "tool"; content: GenericToolResultPart[] }
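As one example of such a conversion, the sketch below maps messages to an OpenAI-style chat list by flattening text parts into a single string. The `Part` and `Message` types here are simplified local assumptions, not the exact assistant-stream types.

```typescript
// Illustrative conversion to an OpenAI-style chat message list.
// Part and message shapes are simplified assumptions for this sketch.
type Part = { type: "text"; text: string };
type Message =
  | { role: "system"; content: string }
  | { role: "user" | "assistant"; content: Part[] };

function toProviderMessages(messages: Message[]) {
  return messages.map((m) =>
    m.role === "system"
      ? { role: m.role, content: m.content }
      : {
          // Flatten text parts into a single string, which most
          // chat-completion APIs accept directly.
          role: m.role,
          content: m.content
            .filter((p) => p.type === "text")
            .map((p) => p.text)
            .join(""),
        },
  );
}

const out = toProviderMessages([
  { role: "system", content: "Be brief." },
  { role: "user", content: [{ type: "text", text: "Hi" }] },
]);
console.log(out);
// [ { role: 'system', content: 'Be brief.' }, { role: 'user', content: 'Hi' } ]
```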

AI SDK Specific Conversion

For AI SDK integration, use toLanguageModelMessages:

import { toLanguageModelMessages } from "@assistant-ui/react-data-stream";

// Convert to AI SDK LanguageModelV2Message format
const languageModelMessages = toLanguageModelMessages(messages, {
  unstable_includeId: true, // Include message IDs
});

toLanguageModelMessages internally uses toGenericMessages and adds AI SDK-specific transformations. For new custom integrations, prefer using toGenericMessages directly.

Error Handling

The runtime automatically handles common error scenarios:

  • Network errors: Automatically retried with exponential backoff
  • Stream interruptions: Gracefully handled with partial content preservation
  • Tool execution errors: Displayed in the UI with error states
  • Cancellation: Clean abort signal handling
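The retry strategy named in the first bullet can be sketched as a generic exponential-backoff wrapper. This is an illustration of the technique, not the runtime's actual internals, and `withRetry` is a hypothetical helper.

```typescript
// Illustrative exponential backoff; not the runtime's implementation.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait 250ms, 500ms, 1000ms, ... between attempts.
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastError;
}
```

A caller might wrap a flaky request as `withRetry(() => fetch("/api/chat", { method: "POST" }))`.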

Best Practices

Performance Optimization

// Use React.memo for expensive components
const OptimizedThread = React.memo(Thread);

// Memoize runtime configuration
const runtimeConfig = useMemo(() => ({
  api: "/api/chat",
  headers: { "Authorization": `Bearer ${token}` },
}), [token]);

const runtime = useDataStreamRuntime(runtimeConfig);

Error Boundaries

import { ErrorBoundary } from "react-error-boundary";

function ChatErrorFallback({ error, resetErrorBoundary }) {
  return (
    <div role="alert">
      <h2>Something went wrong:</h2>
      <pre>{error.message}</pre>
      <button onClick={resetErrorBoundary}>Try again</button>
    </div>
  );
}

export default function App() {
  const runtime = useDataStreamRuntime({ api: "/api/chat" });

  return (
    <ErrorBoundary FallbackComponent={ChatErrorFallback}>
      <AssistantRuntimeProvider runtime={runtime}>
        <Thread />
      </AssistantRuntimeProvider>
    </ErrorBoundary>
  );
}

State Persistence

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  body: {
    // Include conversation state
    state: conversationState,
  },
  onFinish: (message) => {
    // Save state after each message
    saveConversationState(message.metadata.unstable_state);
  },
});
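The `saveConversationState` helper used above is application code, not part of the package. A minimal sketch, assuming an in-memory store (swap the Map for localStorage or a database in a real app):

```typescript
// Hypothetical persistence helpers; replace the Map with localStorage,
// a database, etc. in a real application.
const store = new Map<string, string>();

function saveConversationState(threadId: string, state: unknown): void {
  // Serialize so the stored value is a plain string snapshot.
  store.set(threadId, JSON.stringify(state));
}

function loadConversationState<T>(threadId: string): T | undefined {
  const raw = store.get(threadId);
  return raw === undefined ? undefined : (JSON.parse(raw) as T);
}

saveConversationState("thread_123", { step: 2 });
console.log(loadConversationState("thread_123")); // { step: 2 }
```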

Examples

Explore our examples repository for implementation references.

API Reference

For detailed API documentation, see the @assistant-ui/react-data-stream API Reference.