Upload chat attachments to object storage with a presigned-URL AttachmentAdapter.
The bundled SimpleImageAttachmentAdapter and SimpleTextAttachmentAdapter inline files as data URLs. That works for small images and text files but breaks down for large files, persistent threads, and serverless body-size limits. This page shows the production pattern: an AttachmentAdapter that uploads to object storage via a presigned URL and sends only the URL to the model.
For the adapter contract itself, see adapters. This page is the storage variant.
How it works
```
composer add ──► POST /api/upload (presign) ──► PUT to object storage
                                                        │
                                                        ▼
                                   PendingAttachment with the public URL
                                                        │
composer send ◄─────────────────────────────────────────┘
      │
      └─► send() emits a content part with the URL
                      │
                      ▼
              AI SDK passes URL to the model
```

Three ideas to internalize before reading the code:
- `add` uploads, returns `requires-action`. The composer holds the file with the user's other input.
- `send` finalizes. When the user submits, you mark the attachment `complete` and emit a `content` part with a stable URL.
- `remove` deletes. Optional. Runs only if the user removes the attachment from the composer before sending; messages already sent are immutable.
Setup
Build the presign endpoint
The browser cannot mint upload credentials safely; the server creates a short-lived presigned URL. The example below uses S3, but R2, GCS, and Vercel Blob have nearly identical shapes.
```bash
AWS_REGION=us-east-1
S3_BUCKET=my-chat-uploads
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
```

```ts
// app/api/upload/route.ts
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
import { auth } from "@/auth";
import { generateId } from "ai";

const s3 = new S3Client({ region: process.env.AWS_REGION });

export async function POST(req: Request) {
  const session = await auth();
  if (!session?.user) return new Response(null, { status: 401 });

  const { name, contentType } = (await req.json()) as {
    name: string;
    contentType: string;
  };

  const key = `chat-uploads/${generateId()}-${name}`;
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({
      Bucket: process.env.S3_BUCKET!,
      Key: key,
      ContentType: contentType,
    }),
    { expiresIn: 60 },
  );

  const publicUrl = `https://${process.env.S3_BUCKET}.s3.amazonaws.com/${key}`;
  return Response.json({ uploadUrl: url, publicUrl, key });
}
```

Authenticate the request here. A presigned URL with a 60-second expiry is still a write capability; only authenticated users should mint them.
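One more hardening step worth taking here: `name` comes straight from the browser and is embedded in the object key, so sanitize it before use. A minimal sketch (the `safeName` helper is hypothetical, not part of any adapter API):

```typescript
// Hypothetical helper: strip path separators and unsafe characters
// from a client-supplied filename before embedding it in an object key.
export function safeName(name: string): string {
  // Drop any directory components the client may have included.
  const base = name.split(/[\\/]/).pop() ?? "file";
  // Keep a conservative character set and cap the length.
  return base.replace(/[^a-zA-Z0-9._-]/g, "_").slice(0, 128) || "file";
}
```

Apply it where the key is built: `const key = \`chat-uploads/${generateId()}-${safeName(name)}\``.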
For remove() to work end to end, expose a delete route too:
```ts
// app/api/upload/[key]/route.ts
import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3";
import { auth } from "@/auth";

const s3 = new S3Client({ region: process.env.AWS_REGION });

export async function DELETE(
  _req: Request,
  { params }: { params: Promise<{ key: string }> },
) {
  const session = await auth();
  if (!session?.user) return new Response(null, { status: 401 });

  const { key } = await params;
  await s3.send(
    new DeleteObjectCommand({
      Bucket: process.env.S3_BUCKET!,
      Key: decodeURIComponent(key),
    }),
  );
  return new Response(null, { status: 204 });
}
```

Implement the adapter
```ts
import type {
  AttachmentAdapter,
  PendingAttachment,
  CompleteAttachment,
} from "@assistant-ui/react";

type Pending = PendingAttachment & { key: string; url: string };

export const attachmentAdapter: AttachmentAdapter = {
  accept: "image/*,application/pdf",

  async add({ file }) {
    // Ask the server for a short-lived presigned URL.
    const presign = await fetch("/api/upload", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ name: file.name, contentType: file.type }),
    }).then((r) => r.json());

    // Upload the bytes directly to object storage.
    const put = await fetch(presign.uploadUrl, {
      method: "PUT",
      headers: { "content-type": file.type },
      body: file,
    });
    if (!put.ok) throw new Error(`upload failed: ${put.status}`);

    const pending: Pending = {
      id: presign.key,
      type: file.type.startsWith("image/") ? "image" : "document",
      name: file.name,
      contentType: file.type,
      file,
      url: presign.publicUrl,
      key: presign.key,
      status: { type: "requires-action", reason: "composer-send" },
    };
    return pending;
  },

  async send(attachment): Promise<CompleteAttachment> {
    const { url, type, name, contentType } = attachment as Pending;
    const content =
      type === "image"
        ? [{ type: "image" as const, image: url }]
        : [
            {
              type: "file" as const,
              filename: name,
              mimeType: contentType ?? "application/octet-stream",
              data: url,
            },
          ];
    return { ...attachment, status: { type: "complete" }, content };
  },

  async remove(attachment) {
    // Encode the key: it contains "/" and the delete route decodes it.
    const key = encodeURIComponent((attachment as Pending).key);
    await fetch(`/api/upload/${key}`, { method: "DELETE" });
  },
};
```

The shape of `content` matches the AI SDK part types. Use `image` for images and `file` for everything else; the AI SDK forwards them to multimodal models that accept URL-based content.
Wire it into the runtime
"use client";
import { AssistantRuntimeProvider } from "@assistant-ui/react";
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";
import { attachmentAdapter } from "./attachment-adapter";
export function MyProvider({ children }: { children: React.ReactNode }) {
const runtime = useChatRuntime({
adapters: { attachments: attachmentAdapter },
});
return (
<AssistantRuntimeProvider runtime={runtime}>
{children}
</AssistantRuntimeProvider>
);
}The composer paperclip button appears automatically. The accept string filters the file picker.
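The `accept` string follows the HTML file-input semantics. Purely as an illustration (the composer does this filtering for you; `matchesAccept` is a hypothetical helper): each comma-separated token is either an extension or a MIME type, with `*` allowed as a subtype wildcard.

```typescript
// Miniature of accept-string matching: ".pdf" matches by extension,
// "image/*" matches any image MIME type, and exact types match directly.
export function matchesAccept(
  accept: string,
  name: string,
  mime: string,
): boolean {
  return accept
    .split(",")
    .map((t) => t.trim())
    .some((token) => {
      if (token.startsWith(".")) return name.toLowerCase().endsWith(token.toLowerCase());
      if (token.endsWith("/*")) return mime.startsWith(token.slice(0, -1));
      return mime === token;
    });
}
```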
Run and verify
Pick a file. Check:
- Network tab shows `POST /api/upload` returning a presigned URL, then a `PUT` to the storage host.
- The composer shows a thumbnail / chip while the file is in the `requires-action` state.
- Submitting the message sends the URL (not the file bytes) in the request body to `/api/chat`.
- Object storage has the file under `chat-uploads/...`.
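The third check is easy to automate during development with a throwaway guard in `/api/chat`. A sketch (the `assertNoDataUrls` helper is hypothetical): it throws if any attachment slipped through as inlined base64 instead of a URL.

```typescript
// Throws if the serialized request body contains an inlined data URL,
// which would mean file bytes were embedded instead of a storage URL.
export function assertNoDataUrls(body: unknown): void {
  const text = JSON.stringify(body);
  if (/data:[\w\/+.-]+;base64,/.test(text)) {
    throw new Error("attachment was inlined as a data URL");
  }
}
```

Call it on the parsed request body before invoking the model, and delete it once you trust the adapter.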
Variants
The add() body is the only thing that changes per provider. Tabs below show the upload step; everything else (presign endpoint, send, remove, runtime wiring) is identical.
The example above. R2 is API-compatible with S3, so swap the endpoint:

```ts
const s3 = new S3Client({
  region: "auto",
  endpoint: `https://${process.env.R2_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});
```

Note that R2's S3-compatible endpoint is not publicly readable: build `publicUrl` from the bucket's r2.dev subdomain or a custom domain attached to the bucket, not from the upload endpoint.

Vercel Blob uses client uploads with a server-issued token, not raw presigned PUTs.
```ts
import { handleUpload, type HandleUploadBody } from "@vercel/blob/client";

export async function POST(req: Request) {
  const body = (await req.json()) as HandleUploadBody;
  const json = await handleUpload({
    body,
    request: req,
    onBeforeGenerateToken: async () => ({
      allowedContentTypes: ["image/*", "application/pdf"],
    }),
    onUploadCompleted: async () => {},
  });
  return Response.json(json);
}
```

In the adapter's `add`, call `upload(name, file, { access: "public", handleUploadUrl: "/api/upload" })` from `@vercel/blob/client` and use the returned `url` for `publicUrl`.
Uploadthing wraps both halves. Define a file router on the server, then call useUploadThing from the adapter.
```ts
import { createUploadthing, type FileRouter } from "uploadthing/next";

const f = createUploadthing();

export const ourFileRouter = {
  chatAttachment: f({
    image: { maxFileSize: "8MB" },
    pdf: { maxFileSize: "16MB" },
  })
    .middleware(async () => ({ /* auth */ }))
    .onUploadComplete(async () => {}),
} satisfies FileRouter;
```

In the adapter's `add`, call Uploadthing's client upload and read `url` from the result. The `send`/`remove` shape is unchanged.
Notes
- Persistence. A URL the model sees on Monday must still resolve next week if the thread is reloaded. Either use storage with no expiry on the public URL, or have your history adapter regenerate signed URLs on load. Don't store presigned URLs in the message row.
- Cleanup. `remove` runs only if the user dismisses the attachment before sending. Files in sent messages are kept; deleting them later breaks message rendering.
- The `accept` string. Comma-separated MIME types or extensions, including wildcards (`image/*`). The composer file picker uses this directly. To handle multiple type families with different upload paths, use `CompositeAttachmentAdapter`.
- Server-side validation. Even with presigning, validate `contentType` and file size on the server. The browser controls what it sends; trust nothing.
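The validation note can be made concrete with a small guard run in the presign route before minting the URL. A sketch that assumes you also send the file's `size` in the presign request body (the limits and the `validateUpload` helper are illustrative, not part of any API):

```typescript
// Illustrative server-side check: allow images plus an explicit list of
// other types, and cap the declared size before minting a presigned URL.
const ALLOWED_TYPES = ["application/pdf"];
const MAX_BYTES = 16 * 1024 * 1024; // 16 MB, matching the examples above

export function validateUpload(contentType: string, size: number): boolean {
  const typeOk =
    contentType.startsWith("image/") || ALLOWED_TYPES.includes(contentType);
  return typeOk && size > 0 && size <= MAX_BYTES;
}
```

In the route, reject with a 400 when it returns `false`; remember the declared size is still client-controlled, so enforce hard limits on the bucket or via upload policy where your provider supports it.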