# Data Stream Protocol
URL: /docs/runtimes/data-stream
Integration with data stream protocol endpoints for streaming AI responses.
The `@assistant-ui/react-data-stream` package provides integration with data stream protocol endpoints, enabling streaming AI responses with tool support and state management.
Overview \[#overview]
The data stream protocol is a standardized format for streaming AI responses that supports:
* **Streaming text responses** with real-time updates
* **Tool calling** with structured parameters and results
* **State management** for conversation context
* **Error handling** and cancellation support
* **Attachment support** for multimodal interactions
Installation \[#installation]
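Install the package with your package manager of choice (shown here with npm; the companion packages are inferred from the import paths used in the examples below):

```shell
npm install @assistant-ui/react-data-stream @assistant-ui/react assistant-stream
```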
Basic Usage \[#basic-usage]
Set up the Runtime \[#set-up-the-runtime]
Use `useDataStreamRuntime` to connect to your data stream endpoint:
```tsx title="app/page.tsx"
"use client";

import { useDataStreamRuntime } from "@assistant-ui/react-data-stream";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { Thread } from "@/components/assistant-ui/thread";

export default function ChatPage() {
  const runtime = useDataStreamRuntime({
    api: "/api/chat",
  });

  return (
    <AssistantRuntimeProvider runtime={runtime}>
      <Thread />
    </AssistantRuntimeProvider>
  );
}
```
Create Backend Endpoint \[#create-backend-endpoint]
Your backend endpoint should accept POST requests and return data stream responses:
```typescript title="app/api/chat/route.ts"
import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(request: Request) {
  const { messages, tools, system, threadId } = await request.json();

  return createAssistantStreamResponse(async (controller) => {
    // Process the request with your AI provider
    const stream = await processWithAI({
      messages,
      tools,
      system,
    });

    // Stream the response
    for await (const chunk of stream) {
      controller.appendText(chunk.text);
    }
  });
}
```
The request body includes:
* `messages` - The conversation history
* `tools` - Available tool definitions
* `system` - System prompt (if configured)
* `threadId` - The current thread/conversation identifier
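A minimal sketch of parsing and validating that body on the server (the `ChatRequestBody` interface and `parseChatRequestBody` helper are illustrative names, not part of the package; the exact message shape depends on the runtime version):

```typescript
// Hypothetical shape of the request body, based on the fields listed above.
interface ChatRequestBody {
  messages: unknown[]; // conversation history
  tools?: Record<string, unknown>; // available tool definitions
  system?: string; // system prompt, if configured
  threadId?: string; // current thread/conversation identifier
}

// Validate the parsed JSON before handing it to your AI provider.
function parseChatRequestBody(json: unknown): ChatRequestBody {
  const body = json as Partial<ChatRequestBody>;
  if (!Array.isArray(body.messages)) {
    throw new Error("`messages` must be an array");
  }
  return {
    messages: body.messages,
    tools: body.tools,
    system: body.system,
    threadId: body.threadId,
  };
}
```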
Advanced Configuration \[#advanced-configuration]
Custom Headers and Authentication \[#custom-headers-and-authentication]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  headers: {
    "Authorization": "Bearer " + token,
    "X-Custom-Header": "value",
  },
  credentials: "include",
});
```
Dynamic Headers \[#dynamic-headers]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  headers: async () => {
    const token = await getAuthToken();
    return {
      "Authorization": "Bearer " + token,
    };
  },
});
```
Dynamic Body \[#dynamic-body]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  headers: async () => ({
    "Authorization": `Bearer ${await getAuthToken()}`,
  }),
  body: async () => ({
    requestId: crypto.randomUUID(),
    timestamp: Date.now(),
    signature: await computeSignature(),
  }),
});
```
Event Callbacks \[#event-callbacks]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  onResponse: (response) => {
    console.log("Response received:", response.status);
  },
  onFinish: (message) => {
    console.log("Message completed:", message);
  },
  onError: (error) => {
    console.error("Error occurred:", error);
  },
  onCancel: () => {
    console.log("Request cancelled");
  },
});
```
Tool Integration \[#tool-integration]
Human-in-the-loop tools (using `human()` for tool interrupts) are not supported
in the data stream runtime. If you need human approval workflows or interactive
tool UIs, consider using [LocalRuntime](/docs/runtimes/custom/local) or
[Assistant Cloud](/docs/cloud/overview) instead.
Frontend Tools \[#frontend-tools]
Use the `frontendTools` helper to serialize client-side tools:
```tsx
import { frontendTools } from "@assistant-ui/react-data-stream";
import { makeAssistantTool } from "@assistant-ui/react";
import { z } from "zod";

const weatherTool = makeAssistantTool({
  toolName: "get_weather",
  description: "Get current weather",
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    const weather = await fetchWeather(location);
    return `Weather in ${location}: ${weather}`;
  },
});

const runtime = useDataStreamRuntime({
  api: "/api/chat",
  body: {
    tools: frontendTools({
      get_weather: weatherTool,
    }),
  },
});
```
Backend Tool Processing \[#backend-tool-processing]
Your backend should handle tool calls and return results:
```typescript title="Backend tool handling"
// Tool definitions are automatically forwarded to your endpoint
const { messages, tools } = await request.json();

// Pass them through to your AI provider;
// tool results are streamed back automatically
const response = await ai.generateText({
  messages,
  tools,
});
```
Assistant Cloud Integration \[#assistant-cloud-integration]
For Assistant Cloud deployments, use `useCloudRuntime`:
```tsx
import { useCloudRuntime } from "@assistant-ui/react-data-stream";

const runtime = useCloudRuntime({
  cloud: assistantCloud,
  assistantId: "my-assistant-id",
});
```
The `useCloudRuntime` hook is currently under active development and not yet ready for production use.
Message Conversion \[#message-conversion]
Framework-Agnostic Conversion (Recommended) \[#framework-agnostic-conversion-recommended]
For custom integrations, use the framework-agnostic utilities from `assistant-stream`:
```tsx
import { toGenericMessages, toToolsJSONSchema } from "assistant-stream";
// Convert messages to a generic format
const genericMessages = toGenericMessages(messages);
// Convert tools to JSON Schema format
const toolSchemas = toToolsJSONSchema(tools);
```
The `GenericMessage` format can be easily converted to any LLM provider format:
```tsx
import type { GenericMessage } from "assistant-stream";
// GenericMessage is a union of:
// - { role: "system"; content: string }
// - { role: "user"; content: (GenericTextPart | GenericFilePart)[] }
// - { role: "assistant"; content: (GenericTextPart | GenericToolCallPart)[] }
// - { role: "tool"; content: GenericToolResultPart[] }
```
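For example, a conversion to a flat `{ role, content }` chat message could look like the sketch below. The types here are a local mirror of the union above (omitting file and tool parts for brevity), and `toSimpleChatMessage` is an illustrative helper, not part of `assistant-stream`:

```typescript
// Local mirror of a subset of the GenericMessage union (text parts only).
type TextPart = { type: "text"; text: string };
type GenericMsg =
  | { role: "system"; content: string }
  | { role: "user"; content: TextPart[] }
  | { role: "assistant"; content: TextPart[] };

// Convert a GenericMsg to a flat { role, content } shape.
// File and tool-call parts would need provider-specific handling.
function toSimpleChatMessage(m: GenericMsg): { role: string; content: string } {
  if (m.role === "system") return { role: "system", content: m.content };
  const text = m.content
    .filter((p): p is TextPart => p.type === "text")
    .map((p) => p.text)
    .join("");
  return { role: m.role, content: text };
}
```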
AI SDK Specific Conversion \[#ai-sdk-specific-conversion]
For AI SDK integration, use `toLanguageModelMessages`:
```tsx
import { toLanguageModelMessages } from "@assistant-ui/react-data-stream";

// Convert to AI SDK LanguageModelV2Message format
const languageModelMessages = toLanguageModelMessages(messages, {
  unstable_includeId: true, // Include message IDs
});
```
`toLanguageModelMessages` internally uses `toGenericMessages` and adds AI SDK-specific transformations.
For new custom integrations, prefer using `toGenericMessages` directly.
Error Handling \[#error-handling]
The runtime automatically handles common error scenarios:
* **Network errors**: Automatically retried with exponential backoff
* **Stream interruptions**: Gracefully handled with partial content preservation
* **Tool execution errors**: Displayed in the UI with error states
* **Cancellation**: Clean abort signal handling
Best Practices \[#best-practices]
Performance Optimization \[#performance-optimization]
```tsx
// Use React.memo for expensive components
const OptimizedThread = React.memo(Thread);

// Memoize runtime configuration
const runtimeConfig = useMemo(
  () => ({
    api: "/api/chat",
    headers: { "Authorization": `Bearer ${token}` },
  }),
  [token],
);

const runtime = useDataStreamRuntime(runtimeConfig);
```
Error Boundaries \[#error-boundaries]
```tsx
import { ErrorBoundary } from "react-error-boundary";

function ChatErrorFallback({ error, resetErrorBoundary }) {
  return (
    <div role="alert">
      <p>Something went wrong:</p>
      <pre>{error.message}</pre>
      <button onClick={resetErrorBoundary}>Try again</button>
    </div>
  );
}

export default function App() {
  return (
    <ErrorBoundary FallbackComponent={ChatErrorFallback}>
      {/* your chat UI, e.g. the ChatPage component from above */}
    </ErrorBoundary>
  );
}
```
State Persistence \[#state-persistence]
```tsx
const runtime = useDataStreamRuntime({
  api: "/api/chat",
  body: {
    // Include conversation state
    state: conversationState,
  },
  onFinish: (message) => {
    // Save state after each message
    saveConversationState(message.metadata.unstable_state);
  },
});
```
Examples \[#examples]
Explore our [examples repository](https://github.com/assistant-ui/assistant-ui/tree/main/examples) for implementation references.
API Reference \[#api-reference]
For detailed API documentation, see the [`@assistant-ui/react-data-stream` API Reference](/docs/api-reference/integrations/react-data-stream).