Multi-agent systems enable specialized agents to work together on complex tasks. Each agent focuses on a specific domain and coordinates through orchestration patterns.

Multi-Agent System Design Patterns

There are many ways to design multi‑agent systems, but we commonly see two broadly applicable patterns:
  • Manager (agents as tools): A central manager/orchestrator invokes specialized sub‑agents as tools and retains control of the conversation.
  • Handoffs: Peer agents hand off control to a specialized agent that takes over the conversation. This is decentralized.
This guide focuses on the Manager pattern (agents as tools), which is the most common and straightforward approach.
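
To make the difference concrete, here is a framework-agnostic TypeScript sketch of where control of the conversation lives in each pattern; MiniAgent, managerPattern, and handoffPattern are hypothetical illustrations, not LangChain APIs:

// A minimal stand-in for "something that can answer a prompt".
interface MiniAgent {
  run(input: string): Promise<string>;
}

// Manager: the orchestrator calls a specialist like a function and keeps control,
// so it can post-process the result or call further specialists.
async function managerPattern(
  orchestrator: MiniAgent,
  specialist: MiniAgent,
  input: string
): Promise<string> {
  const draft = await specialist.run(input); // the specialist is used like a tool
  return orchestrator.run(`Review and finalize this draft:\n${draft}`);
}

// Handoff: a routing agent picks a specialist and transfers the conversation to it.
async function handoffPattern(
  route: (input: string) => MiniAgent,
  input: string
): Promise<string> {
  const specialist = route(input); // control transfers to the chosen agent
  return specialist.run(input); // its reply goes straight to the user
}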

Installation

npm i langchain @langchain/openai zod

Manager Pattern (Agents as Tools)

In the Manager pattern, a central orchestrator invokes specialized agents as tools. The orchestrator retains control throughout the conversation and decides when to call each specialized agent.

Creating Specialized Agents

Each specialized agent focuses on a specific task:
import { tool } from "langchain";
import { ChatOpenAI } from "@langchain/openai";
import * as z from "zod";

// Shared chat model for every specialist tool; the model name and endpoint come from environment variables.
const llm = new ChatOpenAI({
  model: process.env.OPENAI_MODEL || "gpt-4o-mini",
  apiKey: process.env.OPENAI_API_KEY,
  configuration: {
    baseURL: process.env.OPENAI_BASE_URL,
  },
});

// Poetry specialist: a single focused LLM call wrapped as a tool the orchestrator can invoke.
// writeStory and writeJoke below follow the same pattern.
const writePoem = tool(
  async ({ topic }) => {
    return (
      await llm.invoke([
        {
          role: "system",
          content:
            "You are an expert poet. Write a poem about the topic provided.",
        },
        {
          role: "user",
          content: topic,
        },
      ])
    ).content as string;
  },
  {
    name: "writePoem",
    description: "Write a poem",
    schema: z.object({
      topic: z.string().describe("The topic of the poem"),
    }),
  }
);

const writeStory = tool(
  async ({ topic }) => {
    return (
      await llm.invoke([
        {
          role: "system",
          content:
            "You are an expert storyteller. Write a story about the topic provided.",
        },
        {
          role: "user",
          content: topic,
        },
      ])
    ).content as string;
  },
  {
    name: "writeStory",
    description: "Write a story",
    schema: z.object({
      topic: z.string().describe("The topic of the story"),
    }),
  }
);

const writeJoke = tool(
  async ({ topic }) => {
    return (
      await llm.invoke([
        {
          role: "system",
          content:
            "You are an expert comedian. Write a joke about the topic provided.",
        },
        {
          role: "user",
          content: topic,
        },
      ])
    ).content as string;
  },
  {
    name: "writeJoke",
    description: "Write a joke",
    schema: z.object({
      topic: z.string().describe("The topic of the joke"),
    }),
  }
);
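
Because each specialist is an ordinary LangChain tool, you can also invoke one directly, which is a quick way to sanity-check it before wiring up the orchestrator (the topic below is just an example):
const poem = await writePoem.invoke({ topic: "autumn rain" });
console.log(poem);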

Creating the Orchestrator

The orchestrator coordinates the workflow by delegating to specialized agents. It wraps specialized agents as tools:
import { createAgent } from "langchain";

// The orchestrator: an agent whose tools are the three specialists defined above.
const agent = createAgent({
  model: llm,
  systemPrompt:
    "You are a useful helper who can write stories, poems, and jokes. Decide which tool to use based on the user's request.",
  tools: [writePoem, writeStory, writeJoke],
});

const res = await agent.invoke({
  messages: [{ role: "user", content: "Tell me a joke about cats" }],
});

res.messages.forEach((message) =>
  console.log(`[${message.type}]: ${message.content}`)
);
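
The loop above prints every message in the conversation trace. If you only need the final answer, the last entry in res.messages is the orchestrator's closing AI message:
const finalMessage = res.messages[res.messages.length - 1];
console.log(finalMessage.content);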

Best Practices

  1. Clear Boundaries: Each agent should have a specific, focused responsibility
  2. Efficient Communication: Pass each agent only the context it needs rather than the full conversation, to keep overhead low
  3. Structured State: Use typed schemas when agents need to share data (see the sketch below)
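
For structured state in this stack, one lightweight option is to describe the shared data with a zod schema, just as the tools above do for their inputs. The ReviewState shape below is purely illustrative and not part of LangChain:
// Hypothetical shared-state schema, validated wherever agents exchange data.
const ReviewState = z.object({
  topic: z.string().describe("What the piece is about"),
  draft: z.string().describe("The current draft text"),
  feedback: z.array(z.string()).describe("Revision notes from other agents"),
});
type ReviewState = z.infer<typeof ReviewState>;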

Next Steps