Attachments
Enable users to attach files to their messages, enhancing conversations with images, documents, and other content.
Overview
The attachment system in assistant-ui provides a flexible framework for handling file uploads in your AI chat interface. It consists of:
- Attachment Adapters: Backend logic for processing attachment files
- UI Components: Pre-built components for attachment display and interaction
- Runtime Integration: Seamless integration with all assistant-ui runtimes
Getting Started
Install UI Components
First, add the attachment UI components to your project:
npx shadcn@latest add "https://r.assistant-ui.com/attachment"
This adds /components/assistant-ui/attachment.tsx to your project.
Next steps: Feel free to adjust these auto-generated components (styling, layout, behavior) to match your application's design system.
Configure Attachment Adapter
Set up an attachment adapter in your runtime provider:
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import {
CompositeAttachmentAdapter,
SimpleImageAttachmentAdapter,
SimpleTextAttachmentAdapter,
} from "@assistant-ui/react";
const runtime = useChatRuntime({
api: "/api/chat",
adapters: {
attachments: new CompositeAttachmentAdapter([
new SimpleImageAttachmentAdapter(),
new SimpleTextAttachmentAdapter(),
]),
},
});
Add UI Components
Integrate attachment components into your chat interface:
// In your Composer component
import {
ComposerAttachments,
ComposerAddAttachment,
} from "@/components/assistant-ui/attachment";
const Composer = () => {
return (
<ComposerPrimitive.Root>
<ComposerAttachments />
<ComposerAddAttachment />
<ComposerPrimitive.Input placeholder="Type a message..." />
</ComposerPrimitive.Root>
);
};
// In your UserMessage component
import { UserMessageAttachments } from "@/components/assistant-ui/attachment";
const UserMessage = () => {
return (
<MessagePrimitive.Root>
<UserMessageAttachments />
<MessagePrimitive.Content />
</MessagePrimitive.Root>
);
};
Built-in Attachment Adapters
SimpleImageAttachmentAdapter
Handles image files and converts them to data URLs for display in the chat UI. By default, images are shown inline but not sent to the LLM - use the VisionImageAdapter example below to send images to vision-capable models.
const imageAdapter = new SimpleImageAttachmentAdapter();
// Accepts: image/* (JPEG, PNG, GIF, etc.)
// Output: { type: "image", url: "data:image/..." }
SimpleTextAttachmentAdapter
Processes text files and wraps content in formatted tags:
const textAdapter = new SimpleTextAttachmentAdapter();
// Accepts: text/plain, text/html, text/markdown, etc.
// Output: Content wrapped in <attachment>...</attachment> tags
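The wrapping can be sketched roughly as follows; the exact tag attributes emitted by SimpleTextAttachmentAdapter are an assumption here, shown for illustration only:

```typescript
// Illustrative sketch of the wrapping SimpleTextAttachmentAdapter performs.
// The name attribute is an assumption; check the adapter source for the
// exact output format.
const wrapAsAttachment = (name: string, text: string): string =>
  `<attachment name="${name}">\n${text}\n</attachment>`;

// wrapAsAttachment("notes.txt", "Hello") produces:
// <attachment name="notes.txt">
// Hello
// </attachment>
```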
CompositeAttachmentAdapter
Combines multiple adapters to support various file types:
const compositeAdapter = new CompositeAttachmentAdapter([
new SimpleImageAttachmentAdapter(),
new SimpleTextAttachmentAdapter(),
// Add more adapters as needed
]);
Creating Custom Attachment Adapters
Build your own adapters for specialized file handling. Below are complete examples for common use cases.
Vision-Capable Image Adapter
Send images to vision-capable LLMs like GPT-4V, Claude 3, or Gemini Pro Vision:
import {
AttachmentAdapter,
PendingAttachment,
CompleteAttachment,
} from "@assistant-ui/react";
class VisionImageAdapter implements AttachmentAdapter {
accept = "image/jpeg,image/png,image/webp,image/gif";
async add({ file }: { file: File }): Promise<PendingAttachment> {
// Validate file size (e.g., 20MB limit for most LLMs)
const maxSize = 20 * 1024 * 1024; // 20MB
if (file.size > maxSize) {
throw new Error("Image size exceeds 20MB limit");
}
// Return pending attachment while processing
return {
id: crypto.randomUUID(),
type: "image",
name: file.name,
file,
status: { type: "running" },
};
}
async send(attachment: PendingAttachment): Promise<CompleteAttachment> {
// Convert image to base64 data URL
const base64 = await this.fileToBase64DataURL(attachment.file);
// Return in assistant-ui format with image content
return {
id: attachment.id,
type: "image",
name: attachment.name,
content: [
{
type: "image",
image: base64, // data:image/jpeg;base64,... format
},
],
status: { type: "complete" },
};
}
async remove(attachment: PendingAttachment): Promise<void> {
// Cleanup if needed (e.g., revoke object URLs if you created any)
}
private async fileToBase64DataURL(file: File): Promise<string> {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.onload = () => {
// FileReader result is already a data URL
resolve(reader.result as string);
};
reader.onerror = reject;
reader.readAsDataURL(file);
});
}
}
PDF Document Adapter
Handle PDF files by extracting text or converting to base64 for processing:
import {
AttachmentAdapter,
PendingAttachment,
CompleteAttachment,
} from "@assistant-ui/react";
class PDFAttachmentAdapter implements AttachmentAdapter {
accept = "application/pdf";
async add({ file }: { file: File }): Promise<PendingAttachment> {
// Validate file size
const maxSize = 10 * 1024 * 1024; // 10MB limit
if (file.size > maxSize) {
throw new Error("PDF size exceeds 10MB limit");
}
return {
id: crypto.randomUUID(),
type: "document",
name: file.name,
file,
status: { type: "running" },
};
}
async send(attachment: PendingAttachment): Promise<CompleteAttachment> {
// Option 1: Extract text from PDF (requires pdf parsing library)
// const text = await this.extractTextFromPDF(attachment.file);
// Option 2: Convert to base64 for API processing
const base64Data = await this.fileToBase64(attachment.file);
return {
id: attachment.id,
type: "document",
name: attachment.name,
content: [
{
type: "text",
text: `[PDF Document: ${attachment.name}]\nBase64 data: ${base64Data.substring(0, 50)}...`
}
],
status: { type: "complete" },
};
}
async remove(attachment: PendingAttachment): Promise<void> {
// Cleanup if needed
}
private async fileToBase64(file: File): Promise<string> {
const arrayBuffer = await file.arrayBuffer();
const bytes = new Uint8Array(arrayBuffer);
let binary = "";
bytes.forEach(byte => {
binary += String.fromCharCode(byte);
});
return btoa(binary);
}
// Optional: Extract text from PDF using a library like pdf.js
private async extractTextFromPDF(file: File): Promise<string> {
// Implementation would use pdf.js or similar
// This is a placeholder
return "Extracted PDF text content";
}
}
Using Custom Adapters
With LocalRuntime
When using LocalRuntime, you need to handle images in your ChatModelAdapter (the adapter that connects to your AI backend):
import { useLocalRuntime, ChatModelAdapter } from "@assistant-ui/react";
// This adapter connects LocalRuntime to your AI backend
const MyModelAdapter: ChatModelAdapter = {
async run({ messages, abortSignal }) {
// Convert messages to format expected by your vision-capable API
const formattedMessages = messages.map(msg => {
if (msg.role === "user" && msg.content.some(part => part.type === "image")) {
// Format for GPT-4V or similar vision models
return {
role: "user",
content: msg.content.map(part => {
if (part.type === "text") {
return { type: "text", text: part.text };
}
if (part.type === "image") {
return {
type: "image_url",
image_url: { url: part.image }
};
}
return part;
})
};
}
// Regular text messages
return {
role: msg.role,
content: msg.content
.filter(c => c.type === "text")
.map(c => c.text)
.join("\n")
};
});
// Send to your vision-capable API
const response = await fetch("/api/vision-chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ messages: formattedMessages }),
signal: abortSignal,
});
const data = await response.json();
return {
content: [{ type: "text", text: data.message }],
};
},
};
// Create runtime with vision image adapter
const runtime = useLocalRuntime(MyModelAdapter, {
adapters: {
attachments: new VisionImageAdapter()
},
});
With Vercel AI SDK
If you're using the Vercel AI SDK, images are handled automatically through experimental attachments:
// In your API route
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { messages } = await req.json();
const result = streamText({
model: openai("gpt-4-vision-preview"),
messages: messages.map(msg => {
if (msg.experimental_attachments?.length) {
// Images are automatically formatted for the model
return {
...msg,
experimental_attachments: msg.experimental_attachments
};
}
return msg;
}),
});
return result.toDataStreamResponse();
}
Advanced Features
Progress Updates
Provide real-time upload progress using async generators:
import {
AttachmentAdapter,
PendingAttachment,
CompleteAttachment,
} from "@assistant-ui/react";
class UploadAttachmentAdapter implements AttachmentAdapter {
accept = "*/*";
async *add({ file }: { file: File }) {
const id = crypto.randomUUID();
// Initial pending state
yield {
id,
type: "file",
name: file.name,
file,
status: { type: "running", progress: 0 },
} as PendingAttachment;
// Simulate upload progress
for (let progress = 10; progress <= 90; progress += 10) {
await new Promise((resolve) => setTimeout(resolve, 100));
yield {
id,
type: "file",
name: file.name,
file,
status: { type: "running", progress },
} as PendingAttachment;
}
// Return final pending state
return {
id,
type: "file",
name: file.name,
file,
status: { type: "running", progress: 100 },
} as PendingAttachment;
}
async send(attachment: PendingAttachment): Promise<CompleteAttachment> {
// Upload the file and return complete attachment
const url = await this.uploadFile(attachment.file);
return {
id: attachment.id,
type: attachment.type,
name: attachment.name,
content: [
{
type: "file",
data: url, // or base64 data
mimeType: attachment.file.type,
},
],
status: { type: "complete" },
};
}
async remove(attachment: PendingAttachment): Promise<void> {
// Cleanup logic
}
private async uploadFile(file: File): Promise<string> {
// Your upload logic here
return "https://example.com/file-url";
}
}
Validation and Error Handling
Implement robust validation in your adapters:
import { AttachmentAdapter, PendingAttachment } from "@assistant-ui/react";
class ValidatedImageAdapter implements AttachmentAdapter {
accept = "image/*";
maxSizeBytes = 5 * 1024 * 1024; // 5MB
async add({ file }: { file: File }): Promise<PendingAttachment> {
// Validate file size
if (file.size > this.maxSizeBytes) {
return {
id: crypto.randomUUID(),
type: "image",
name: file.name,
file,
status: {
type: "incomplete",
reason: "error",
error: new Error("File size exceeds 5MB limit"),
},
};
}
// Validate image dimensions
try {
const dimensions = await this.getImageDimensions(file);
if (dimensions.width > 4096 || dimensions.height > 4096) {
throw new Error("Image dimensions exceed 4096x4096");
}
} catch (error) {
return {
id: crypto.randomUUID(),
type: "image",
name: file.name,
file,
status: {
type: "incomplete",
reason: "error",
error: error instanceof Error ? error : new Error(String(error)),
},
};
}
// Return valid attachment
return {
id: crypto.randomUUID(),
type: "image",
name: file.name,
file,
status: { type: "running" },
};
}
// send() and remove() omitted for brevity
private async getImageDimensions(file: File): Promise<{ width: number; height: number }> {
return new Promise((resolve, reject) => {
const url = URL.createObjectURL(file);
const img = new Image();
img.onload = () => {
URL.revokeObjectURL(url);
resolve({ width: img.naturalWidth, height: img.naturalHeight });
};
img.onerror = () => {
URL.revokeObjectURL(url);
reject(new Error("Failed to load image"));
};
img.src = url;
});
}
}
Multiple File Selection
Enable multi-file selection with custom limits:
// Inside a React component
const composer = useComposer();
const handleMultipleFiles = async (files: FileList) => {
const maxFiles = 5;
const filesToAdd = Array.from(files).slice(0, maxFiles);
for (const file of filesToAdd) {
await composer.addAttachment({ file });
}
};
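If you want to reject overflow or oversized files instead of silently truncating the selection, a small pre-filter can separate accepted from rejected files before they reach the composer. This is a hypothetical helper, not part of assistant-ui:

```typescript
// Hypothetical pre-filter: enforce a count cap and a per-file size cap
// before handing files to the composer. FileLike stands in for File so
// the logic is easy to test in isolation.
interface FileLike {
  name: string;
  size: number;
}

const selectFiles = <T extends FileLike>(
  files: T[],
  { maxFiles, maxSizeBytes }: { maxFiles: number; maxSizeBytes: number },
): { accepted: T[]; rejected: T[] } => {
  const accepted: T[] = [];
  const rejected: T[] = [];
  for (const file of files) {
    if (accepted.length < maxFiles && file.size <= maxSizeBytes) {
      accepted.push(file);
    } else {
      rejected.push(file); // too many files, or this file is too large
    }
  }
  return { accepted, rejected };
};
```

The rejected list can then drive a user-facing error message rather than dropping files without feedback.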
Backend Integration
With Vercel AI SDK
Process attachments in your API route:
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";
export async function POST(req: Request) {
const { messages } = await req.json();
// Process messages with attachments
const processedMessages = messages.map((msg) => {
if (msg.role === "user" && msg.experimental_attachments) {
// Handle attachments
const attachmentContent = msg.experimental_attachments
.map((att) => {
if (att.contentType.startsWith("image/")) {
return `[Image: ${att.name}]`;
}
return att.content;
})
.join("\n");
return {
...msg,
content: `${msg.content}\n\nAttachments:\n${attachmentContent}`,
};
}
return msg;
});
const result = streamText({
model: openai("gpt-4o"),
messages: processedMessages,
});
return result.toDataStreamResponse();
}
Custom Backend Handling
Implement your own attachment processing:
// In your attachment adapter
class ServerUploadAdapter implements AttachmentAdapter {
async send(attachment: PendingAttachment): Promise<CompleteAttachment> {
const formData = new FormData();
formData.append("file", attachment.file);
const response = await fetch("/api/upload", {
method: "POST",
body: formData,
});
const { url, id } = await response.json();
return {
id,
type: attachment.type,
name: attachment.name,
content: [
{
type: "image",
url,
},
],
status: { type: "complete" },
};
}
}
Runtime Support
Attachments work with all assistant-ui runtimes:
- AI SDK Runtime: useChatRuntime, useAssistantRuntime
- External Store: useExternalStoreRuntime
- LangGraph: useLangGraphRuntime
- Custom Runtimes: any runtime implementing the attachment interface
The attachment system is designed to be extensible. You can create adapters for any file type, integrate with cloud storage services, or implement custom processing logic to fit your specific needs.
Best Practices
- File Size Limits: Always validate file sizes to prevent memory issues
- Type Validation: Verify file types match your accept pattern
- Error Handling: Provide clear error messages for failed uploads
- Progress Feedback: Show upload progress for better UX
- Security: Validate and sanitize file content before processing
- Accessibility: Ensure attachment UI is keyboard navigable
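The "Type Validation" practice can be sketched with a small helper that checks a file's MIME type against an adapter's accept pattern. This is illustrative, not an assistant-ui API:

```typescript
// Hypothetical helper: match a MIME type against a comma-separated accept
// pattern, supporting exact types, "type/*" wildcards, and "*/*".
const matchesAccept = (mimeType: string, accept: string): boolean =>
  accept.split(",").some((pattern) => {
    const p = pattern.trim();
    if (p === "*/*") return true;
    if (p.endsWith("/*")) return mimeType.startsWith(p.slice(0, -1));
    return mimeType === p;
  });

// matchesAccept("image/png", "image/*")                 → true
// matchesAccept("application/pdf", "image/*,text/plain") → false
```

A call like this at the top of an adapter's add() lets you reject mismatched files with a clear error before any processing starts.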
Resources
- Attachment UI Components - UI implementation details
- API Reference - Detailed type definitions