# Separate Server Integration

URL: /docs/runtimes/mastra/separate-server-integration

Run Mastra as a standalone server connected to your frontend.

Run Mastra as a standalone server and connect your Next.js frontend (using assistant-ui) to its API endpoints. This approach separates your AI backend from your frontend application, allowing for independent development and scaling.

## Create Mastra Server Project \[#create-mastra-server-project]

First, create a dedicated project for your Mastra server, in a directory separate from your Next.js/assistant-ui frontend project. Navigate to your chosen parent directory in the terminal and run the Mastra create command:

```bash
npx create-mastra@latest
```

This command launches an interactive wizard that scaffolds a new Mastra project, prompting you for a project name and setting up basic configuration. Follow the prompts to create your server project. For more detailed setup instructions, refer to the [official Mastra installation guide](https://mastra.ai/docs/getting-started/installation).

Once setup is complete, navigate into your new Mastra project directory (the name you provided during setup):

```bash
cd your-mastra-server-directory # Replace with the actual directory name
```

In the next steps you'll need the `@mastra/ai-sdk` package. Add it to your Mastra project with your package manager, e.g. `npm install @mastra/ai-sdk`.

You now have a basic Mastra server project. Make sure the environment variables required by the models you use (e.g., `OPENAI_API_KEY`) are configured within this Mastra server project, typically in a `.env.development` file. The `create-mastra` wizard may prompt you for some keys, but verify that all keys needed by your chosen models are present.

## Define the Agent \[#define-the-agent]

Next, let's define an agent within your Mastra server project. We'll create a `chefAgent` similar to the one used in the full-stack guide.
Open or create the agent file (e.g., `src/mastra/agents/chefAgent.ts` within your Mastra project) and add the following code:

```typescript title="src/mastra/agents/chefAgent.ts"
import { Agent } from "@mastra/core/agent";

export const chefAgent = new Agent({
  name: "chef-agent",
  instructions:
    "You are Michel, a practical and experienced home chef. " +
    "You help people cook with whatever ingredients they have available.",
  model: "openai/gpt-4o-mini",
});
```

This defines the agent's behavior, but it's not yet active in the Mastra server.

## Register the Agent \[#register-the-agent]

Now, register the `chefAgent` with your Mastra instance so the server knows about it. Open your main Mastra configuration file (often `src/mastra/index.ts` in projects created with `create-mastra`). Import the `chefAgent` and add it to the `agents` object when initializing Mastra:

```typescript title="src/mastra/index.ts"
import { Mastra } from "@mastra/core";

import { chefAgent } from "./agents/chefAgent"; // Adjust path if necessary

export const mastra = new Mastra({
  agents: { chefAgent },
});
```

Make sure you adapt this code to fit the existing structure of the `src/mastra/index.ts` file generated by `create-mastra`. The key is to import your agent and include it in the `agents` configuration object.

## Register the Chat Route \[#register-the-chat-route]

Still inside `src/mastra/index.ts`, register a chat route for the `chefAgent` using `chatRoute()` from `@mastra/ai-sdk`.
You need to place this inside `server.apiRoutes` of your Mastra configuration:

```typescript title="src/mastra/index.ts" {3,7-13}
import { Mastra } from "@mastra/core";
import { chefAgent } from "./agents/chefAgent";
import { chatRoute } from "@mastra/ai-sdk";

export const mastra = new Mastra({
  agents: { chefAgent },
  server: {
    apiRoutes: [
      chatRoute({
        path: "/chat/:agentId",
      }),
    ],
  },
});
```

Make sure you adapt this code to fit the existing structure of the `src/mastra/index.ts` file generated by `create-mastra`. This makes all agents available in AI SDK-compatible formats, including the `chefAgent` at the endpoint `/chat/chefAgent`.

## Run the Mastra Server \[#run-the-mastra-server]

With the agent defined and registered, start the Mastra development server:

```bash npm2yarn
npm run dev
```

By default, the Mastra server runs on `http://localhost:4111`. Keep this server running for the next steps, where we'll set up the assistant-ui frontend to connect to it.

## Initialize assistant-ui Frontend \[#initialize-assistant-ui-frontend]

Now, set up your frontend application using assistant-ui. Navigate to a **different directory** from your Mastra server project. You can either create a new Next.js project or use an existing one. Inside your frontend project directory, run one of the following commands:

```sh title="New Project"
npx assistant-ui@latest create
```

```sh title="Existing Project"
npx assistant-ui@latest init
```

This command installs the necessary assistant-ui dependencies and sets up basic configuration files, including a default chat page and an API route (`app/api/chat/route.ts`). For detailed setup instructions, including manual setup steps, refer to the main [Getting Started guide](/docs).

In the next step, we will configure this frontend to communicate with the separate Mastra server instead of using the default API route.
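Before wiring the frontend up, it helps to know the shape of the endpoints the Mastra server exposes. The `:agentId` segment in the registered `/chat/:agentId` route is a path parameter, so each key in the `agents` object gets its own endpoint. A minimal sketch of that expansion (this helper is illustrative only, not part of Mastra's API):

```typescript
// Illustrative only: shows how a route template with a path parameter
// expands into one concrete endpoint per registered agent key.
const registeredAgents = ["chefAgent"]; // keys from the `agents` object

function expandRoute(template: string, agentId: string): string {
  return template.replace(":agentId", agentId);
}

const endpoints = registeredAgents.map((id) =>
  expandRoute("/chat/:agentId", id),
);
console.log(endpoints); // → ["/chat/chefAgent"]
```

With the dev server on port 4111, the `chefAgent` endpoint is therefore `http://localhost:4111/chat/chefAgent`, which is the URL the frontend will target.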
## Configure Frontend API Endpoint \[#configure-frontend-api-endpoint]

The default assistant-ui setup points the chat runtime at a local API route (`/api/chat`) within the Next.js project. Since our Mastra agent runs on a separate server, we need to update the frontend to point to that server's endpoint.

Open the file in your assistant-ui frontend project that contains the `useChatRuntime` hook (usually `app/assistant.tsx` or `src/app/assistant.tsx`). Find the `useChatRuntime` call and set the `api` property to the full URL of your Mastra agent's stream endpoint:

```tsx {8} title="app/assistant.tsx"
"use client";

// Rest of the imports...

export const Assistant = () => {
  const runtime = useChatRuntime({
    transport: new AssistantChatTransport({
      api: "http://localhost:4111/chat/chefAgent",
    }),
  });

  // Rest of the component...
};
```

Replace `"http://localhost:4111/chat/chefAgent"` with the actual URL if your Mastra server runs on a different port or host, or if your agent has a different name. The assistant-ui frontend will now send chat requests directly to your running Mastra server.

Since the frontend no longer uses the local `/api/chat` route created by the `init` command, you can safely delete `app/api/chat/route.ts` (or `src/app/api/chat/route.ts`) from your frontend project.

## Run the Frontend Application \[#run-the-frontend-application]

You're ready to connect the pieces! Make sure your separate Mastra server is still running. In your assistant-ui frontend project directory, start the Next.js development server:

```bash npm2yarn
npm run dev
```

Open your browser to `http://localhost:3000` (or the port shown in your terminal for the frontend app). You should now be able to interact with your `chefAgent` through the assistant-ui chat interface. The frontend makes requests to your Mastra server running on `http://localhost:4111`. Congratulations!
You have successfully integrated Mastra with assistant-ui using a separate server approach. Your assistant-ui frontend now communicates with a standalone Mastra agent server. This setup provides a clear separation between your frontend and AI backend. To explore more advanced Mastra features like memory, tools, workflows, and deployment options, please refer to the [official Mastra documentation](https://mastra.ai/docs).
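When you deploy, the Mastra server's URL will differ between environments, so it is worth deriving the `api` value from configuration rather than hard-coding `localhost`. A small sketch, assuming a hypothetical `NEXT_PUBLIC_MASTRA_URL` environment variable (the name is our own choice; neither assistant-ui nor Mastra prescribes it):

```typescript
// Sketch: build the chat endpoint from a configurable base URL.
// NEXT_PUBLIC_MASTRA_URL is a hypothetical variable name, not a
// convention required by assistant-ui or Mastra.
export function mastraChatUrl(agentId: string, base?: string): string {
  // Fall back to the local dev server; strip any trailing slash.
  const root = (base ?? "http://localhost:4111").replace(/\/+$/, "");
  return `${root}/chat/${agentId}`;
}

// In the component:
//   api: mastraChatUrl("chefAgent", process.env.NEXT_PUBLIC_MASTRA_URL)
console.log(mastraChatUrl("chefAgent")); // → http://localhost:4111/chat/chefAgent
```

This keeps the frontend code unchanged across environments; only the environment variable differs between local development and production.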