Display stream performance metrics on assistant messages: duration, tokens per second, and time to first token (TTFT).
## Reading Timing Data

Use `useMessageTiming()` inside a message component to access timing data:
```tsx
import type { FC } from "react";
import { useMessageTiming } from "@assistant-ui/react";

const MessageTimingDisplay: FC = () => {
  const timing = useMessageTiming();
  if (!timing?.totalStreamTime) return null;

  const formatMs = (ms: number) =>
    ms < 1000 ? `${Math.round(ms)}ms` : `${(ms / 1000).toFixed(1)}s`;

  return (
    <span className="text-xs text-muted-foreground">
      {formatMs(timing.totalStreamTime)}
      {timing.tokensPerSecond !== undefined &&
        ` · ${timing.tokensPerSecond.toFixed(1)} tok/s`}
    </span>
  );
};
```

Place it inside `MessagePrimitive.Root`, typically near the action bar:
```tsx
import type { FC } from "react";
import { MessagePrimitive, ActionBarPrimitive } from "@assistant-ui/react";

const AssistantMessage: FC = () => {
  return (
    <MessagePrimitive.Root>
      <MessagePrimitive.Parts components={{ ... }} />
      <ActionBarPrimitive.Root>
        <ActionBarPrimitive.Copy />
        <ActionBarPrimitive.Reload />
        <MessageTimingDisplay />
      </ActionBarPrimitive.Root>
    </MessagePrimitive.Root>
  );
};
```

## MessageTiming Fields
| Field | Type | Description |
|---|---|---|
| `streamStartTime` | `number` | Unix timestamp when the stream started |
| `firstTokenTime` | `number?` | Time to first text token (ms) |
| `totalStreamTime` | `number?` | Total stream duration (ms) |
| `tokenCount` | `number?` | Estimated or actual token count |
| `tokensPerSecond` | `number?` | Throughput (tokens/sec) |
| `totalChunks` | `number` | Total stream chunks received |
| `toolCallCount` | `number` | Number of tool calls |
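The field shapes above can be mirrored in a local type when post-processing timing values. Below is a minimal sketch; the `MessageTiming` interface here is written out for illustration (the real type is exported by `@assistant-ui/react`), and `deriveTokensPerSecond` is a hypothetical helper, not a library export:

```typescript
// Local type mirroring the MessageTiming fields in the table above
// (illustrative; use the type exported by @assistant-ui/react in practice).
interface MessageTiming {
  streamStartTime: number;
  firstTokenTime?: number;
  totalStreamTime?: number;
  tokenCount?: number;
  tokensPerSecond?: number;
  totalChunks: number;
  toolCallCount: number;
}

// Hypothetical helper: fall back to computing throughput from the raw
// fields when tokensPerSecond is not populated.
function deriveTokensPerSecond(t: MessageTiming): number | undefined {
  if (t.tokensPerSecond !== undefined) return t.tokensPerSecond;
  if (t.tokenCount === undefined || !t.totalStreamTime) return undefined;
  return t.tokenCount / (t.totalStreamTime / 1000);
}

const sample: MessageTiming = {
  streamStartTime: 1700000000000,
  firstTokenTime: 180,
  totalStreamTime: 2500,
  tokenCount: 125,
  totalChunks: 42,
  toolCallCount: 0,
};

console.log(deriveTokensPerSecond(sample)); // 125 tokens / 2.5s = 50
```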
## Runtime Support

| Runtime | Supported | Notes |
|---|---|---|
| DataStream | Yes | Automatic via `AssistantMessageAccumulator` |
| AI SDK (`useChatRuntime`) | Yes | Automatic via client-side tracking |
| Local (`useLocalRuntime`) | Yes | Pass timing in `ChatModelRunResult.metadata` |
| ExternalStore | Yes | Pass timing in `ThreadMessageLike.metadata` |
| LangGraph | No | Not yet implemented |
| AG-UI | No | Not yet implemented |
### DataStream

Timing is tracked automatically inside `AssistantMessageAccumulator`. No setup is required.
```tsx
import { useDataStreamRuntime } from "@assistant-ui/react-data-stream";

const runtime = useDataStreamRuntime({ api: "/api/chat" });
// useMessageTiming() works out of the box
```

### AI SDK (useChatRuntime)
Timing is tracked automatically on the client side by observing streaming state transitions and content changes, and is finalized when each stream completes. `tokenCount` and `tokensPerSecond` are estimated from text length.
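The exact estimation heuristic is internal to the library. As a sketch of the idea only, a common rough approximation (assumed here, not the library's actual formula) is about four characters per token:

```typescript
// Rough token estimate from text length (~4 chars/token for English text).
// This illustrates the idea of client-side estimation; the library's
// actual heuristic may differ.
function estimateTokenCount(text: string): number {
  return Math.max(1, Math.round(text.length / 4));
}

// Derived throughput: estimated tokens divided by elapsed seconds.
function estimateTokensPerSecond(
  text: string,
  totalStreamTimeMs: number,
): number {
  return estimateTokenCount(text) / (totalStreamTimeMs / 1000);
}

console.log(estimateTokenCount("a".repeat(400))); // 100
console.log(estimateTokensPerSecond("a".repeat(400), 2000)); // 50
```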
```tsx
import { useChatRuntime } from "@assistant-ui/react-ai-sdk";

const runtime = useChatRuntime({ api: "/api/chat" });
// useMessageTiming() works out of the box
```

### Local (useLocalRuntime)
Pass timing in the `metadata` field of your `ChatModelRunResult`:
```tsx
import type { ChatModelAdapter } from "@assistant-ui/react";

const myAdapter: ChatModelAdapter = {
  async run({ messages, abortSignal }) {
    const startTime = Date.now();
    const result = await callMyAPI(messages, abortSignal);
    const totalStreamTime = Date.now() - startTime;

    return {
      content: [{ type: "text", text: result.text }],
      metadata: {
        timing: {
          streamStartTime: startTime,
          totalStreamTime,
          tokenCount: result.usage?.completionTokens,
          tokensPerSecond: result.usage?.completionTokens
            ? result.usage.completionTokens / (totalStreamTime / 1000)
            : undefined,
          totalChunks: 1,
          toolCallCount: 0,
        },
      },
    };
  },
};
```

### ExternalStore (useExternalStoreRuntime)
Pass timing in the `metadata.timing` field of your `ThreadMessageLike` messages:
```tsx
import type { ThreadMessageLike } from "@assistant-ui/react";

const message: ThreadMessageLike = {
  role: "assistant",
  content: [{ type: "text", text: fullText }],
  metadata: {
    timing: {
      streamStartTime: startTime,
      firstTokenTime,
      totalStreamTime,
      tokenCount,
      tokensPerSecond,
      totalChunks: chunks,
      toolCallCount: 0,
    },
  },
};
```

## API Reference
### useMessageTiming()

```tsx
const timing: MessageTiming | undefined = useMessageTiming();
```

Returns timing metadata for the current assistant message, or `undefined` for non-assistant messages or when no timing data is available. Must be used inside a `MessagePrimitive.Root` context.
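Putting the fields together, a plain function can assemble the kind of summary string a timing display might render. This is a sketch of the formatting logic alone; `formatTimingSummary` is an illustrative helper, not a library export:

```typescript
// Build a human-readable summary line from optional timing values,
// skipping any field that is absent.
function formatTimingSummary(timing: {
  totalStreamTime?: number;
  tokensPerSecond?: number;
  firstTokenTime?: number;
}): string {
  const fmt = (ms: number) =>
    ms < 1000 ? `${Math.round(ms)}ms` : `${(ms / 1000).toFixed(1)}s`;

  const parts: string[] = [];
  if (timing.totalStreamTime !== undefined) parts.push(fmt(timing.totalStreamTime));
  if (timing.tokensPerSecond !== undefined)
    parts.push(`${timing.tokensPerSecond.toFixed(1)} tok/s`);
  if (timing.firstTokenTime !== undefined)
    parts.push(`TTFT ${fmt(timing.firstTokenTime)}`);
  return parts.join(" · ");
}

console.log(
  formatTimingSummary({ totalStreamTime: 2500, tokensPerSecond: 50, firstTokenTime: 180 }),
); // "2.5s · 50.0 tok/s · TTFT 180ms"
```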