A ready-to-run example is included below.

Overview

Agent delegation allows a main agent to spawn multiple sub-agents and delegate tasks to them for parallel processing. Each sub-agent runs independently with its own conversation context and returns results that the main agent can consolidate and process further. This pattern is useful when:
  • Breaking down complex problems into independent subtasks
  • Processing multiple related tasks in parallel
  • Separating concerns between different specialized sub-agents
  • Improving throughput for parallelizable work

How It Works

The delegation system consists of two main operations:

1. Spawning Sub-Agents

Before delegating work, the agent must first spawn sub-agents with meaningful identifiers:
# Agent uses the delegate tool to spawn sub-agents
{
    "command": "spawn",
    "ids": ["lodging", "activities"]
}
Each spawned sub-agent:
  • Gets a unique identifier that the agent specifies (e.g., “lodging”, “activities”)
  • Inherits the same LLM configuration as the parent agent
  • Operates in the same workspace as the main agent
  • Maintains its own independent conversation context

2. Delegating Tasks

Once sub-agents are spawned, the agent can delegate tasks to them:
# Agent uses the delegate tool to assign tasks
{
    "command": "delegate",
    "tasks": {
        "lodging": "Find the best budget-friendly areas to stay in London",
        "activities": "List top 5 must-see attractions and hidden gems in London"
    }
}
The delegate operation:
  • Runs all sub-agent tasks in parallel using threads
  • Blocks until all sub-agents complete their work
  • Returns a single consolidated observation with all results
  • Handles errors gracefully and reports them per sub-agent
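Conceptually, the fan-out/join behavior described above resembles a standard thread-pool pattern. The sketch below is purely illustrative and is not the actual `DelegateTool` implementation; `run_subtask` and the consolidated result format are hypothetical stand-ins for a sub-agent running its own conversation:

```python
from concurrent.futures import ThreadPoolExecutor


def run_subtask(agent_id: str, task: str) -> str:
    # Hypothetical placeholder for a sub-agent running its own
    # independent conversation and returning its final answer.
    return f"[{agent_id}] completed: {task}"


def delegate(tasks: dict[str, str]) -> str:
    # Fan out: one thread per sub-agent, then block until all finish.
    results: dict[str, str] = {}
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {
            agent_id: pool.submit(run_subtask, agent_id, task)
            for agent_id, task in tasks.items()
        }
        for agent_id, future in futures.items():
            try:
                results[agent_id] = future.result()
            except Exception as exc:
                # Errors are captured and reported per sub-agent
                # rather than aborting the whole delegation.
                results[agent_id] = f"ERROR: {exc}"
    # Consolidate everything into a single observation string.
    return "\n".join(f"{aid}: {res}" for aid, res in results.items())


print(delegate({"lodging": "find areas", "activities": "list attractions"}))
```

The key property this models is that `delegate` blocks until every sub-agent returns, so the main agent always sees one consolidated result.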

Setting Up the DelegateTool

1. Register the Tool

from openhands.sdk.tool import register_tool
from openhands.tools.delegate import DelegateTool

register_tool("DelegateTool", DelegateTool)
2. Add to Agent Tools

from openhands.sdk import Agent, Tool
from openhands.tools.preset.default import get_default_tools

tools = get_default_tools(enable_browser=False)
tools.append(Tool(name="DelegateTool"))

agent = Agent(llm=llm, tools=tools)
3. Configure Maximum Sub-Agents (Optional)

You can limit the maximum number of concurrent sub-agents:
from openhands.sdk.tool import register_tool
from openhands.tools.delegate import DelegateTool

class CustomDelegateTool(DelegateTool):
    @classmethod
    def create(cls, conv_state, max_children: int = 3):
        # Only allow up to 3 sub-agents
        return super().create(conv_state, max_children=max_children)

register_tool("DelegateTool", CustomDelegateTool)

Tool Commands

spawn

Initialize sub-agents with meaningful identifiers. Parameters:
  • command: "spawn"
  • ids: List of string identifiers (e.g., ["research", "implementation", "testing"])
Returns: A message indicating the sub-agents were successfully spawned. Example:
{
    "command": "spawn",
    "ids": ["research", "implementation", "testing"]
}

delegate

Send tasks to specific sub-agents and wait for results. Parameters:
  • command: "delegate"
  • tasks: Dictionary mapping sub-agent IDs to task descriptions
Returns: A consolidated message containing all results from the sub-agents. Example:
{
    "command": "delegate",
    "tasks": {
        "research": "Find best practices for async code",
        "implementation": "Refactor the MyClass class",
        "testing": "Write unit tests for the refactored code"
    }
}

Ready-to-run Example

This example is available on GitHub: examples/01_standalone_sdk/25_agent_delegation.py
"""
Agent Delegation Example

This example demonstrates the agent delegation feature where a main agent
delegates tasks to sub-agents for parallel processing.
Each sub-agent runs independently and returns its results to the main agent,
which then merges both analyses into a single consolidated report.
"""

import os

from pydantic import SecretStr

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Tool,
    get_logger,
)
from openhands.sdk.context import Skill
from openhands.sdk.tool import register_tool
from openhands.tools.delegate import (
    DelegateTool,
    DelegationVisualizer,
    register_agent,
)
from openhands.tools.preset.default import get_default_tools


ONLY_RUN_SIMPLE_DELEGATION = False

logger = get_logger(__name__)

# Configure LLM and agent
# You can get an API key from https://app.all-hands.dev/settings/api-keys
api_key = os.getenv("LLM_API_KEY")
assert api_key is not None, "LLM_API_KEY environment variable is not set."
model = os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929")
llm = LLM(
    model=model,
    api_key=SecretStr(api_key),
    base_url=os.environ.get("LLM_BASE_URL", None),
    usage_id="agent",
)

cwd = os.getcwd()

register_tool("DelegateTool", DelegateTool)
tools = get_default_tools(enable_browser=False)
tools.append(Tool(name="DelegateTool"))

main_agent = Agent(
    llm=llm,
    tools=tools,
)
conversation = Conversation(
    agent=main_agent,
    workspace=cwd,
    visualizer=DelegationVisualizer(name="Delegator"),
)

task_message = (
    "Forget about coding. Let's switch to travel planning. "
    "Let's plan a trip to London. I have two issues I need to solve: "
    "Lodging: what are the best areas to stay at while keeping budget in mind? "
    "Activities: what are the top 5 must-see attractions and hidden gems? "
    "Please use the delegation tools to handle these two tasks in parallel. "
    "Make sure the sub-agents use their own knowledge "
    "and don't rely on internet access. "
    "They should keep it short. After getting the results, merge both analyses "
    "into a single consolidated report.\n\n"
)
conversation.send_message(task_message)
conversation.run()

conversation.send_message(
    "Ask the lodging sub-agent what it thinks about Covent Garden."
)
conversation.run()

# Report cost for simple delegation example
cost_1 = conversation.conversation_stats.get_combined_metrics().accumulated_cost
print(f"EXAMPLE_COST (simple delegation): {cost_1}")

print("Simple delegation example done!", "\n" * 20)


# -------- Agent Delegation Second Part: User-Defined Agent Types --------

if ONLY_RUN_SIMPLE_DELEGATION:
    exit(0)


def create_lodging_planner(llm: LLM) -> Agent:
    """Create a lodging planner focused on London stays."""
    skills = [
        Skill(
            name="lodging_planning",
            content=(
                "You specialize in finding great places to stay in London. "
                "Provide 3-4 hotel recommendations with neighborhoods, quick "
                "pros/cons, "
                "and notes on transit convenience. Keep options varied by budget."
            ),
            trigger=None,
        )
    ]
    return Agent(
        llm=llm,
        tools=[],
        agent_context=AgentContext(
            skills=skills,
            system_message_suffix="Focus only on London lodging recommendations.",
        ),
    )


def create_activities_planner(llm: LLM) -> Agent:
    """Create an activities planner focused on London itineraries."""
    skills = [
        Skill(
            name="activities_planning",
            content=(
                "You design concise London itineraries. Suggest 2-3 daily "
                "highlights, grouped by proximity to minimize travel time. "
                "Include food/coffee stops "
                "and note required tickets/reservations."
            ),
            trigger=None,
        )
    ]
    return Agent(
        llm=llm,
        tools=[],
        agent_context=AgentContext(
            skills=skills,
            system_message_suffix="Plan practical, time-efficient days in London.",
        ),
    )


# Register user-defined agent types (default agent type is always available)
register_agent(
    name="lodging_planner",
    factory_func=create_lodging_planner,
    description="Finds London lodging options with transit-friendly picks.",
)
register_agent(
    name="activities_planner",
    factory_func=create_activities_planner,
    description="Creates time-efficient London activity itineraries.",
)

# Make the delegation tool available to the main agent
register_tool("DelegateTool", DelegateTool)

main_agent = Agent(
    llm=llm,
    tools=[Tool(name="DelegateTool")],
)
conversation = Conversation(
    agent=main_agent,
    workspace=cwd,
    visualizer=DelegationVisualizer(name="Delegator"),
)

task_message = (
    "Plan a 3-day London trip. "
    "1) Spawn two sub-agents: lodging_planner (hotel options) and "
    "activities_planner (itinerary). "
    "2) Ask lodging_planner for 3-4 central London hotel recommendations with "
    "neighborhoods, quick pros/cons, and transit notes by budget. "
    "3) Ask activities_planner for a concise 3-day itinerary with nearby stops, "
    "   food/coffee suggestions, and any ticket/reservation notes. "
    "4) Share both sub-agent results and propose a combined plan."
)

print("=" * 100)
print("Demonstrating London trip delegation (lodging + activities)...")
print("=" * 100)

conversation.send_message(task_message)
conversation.run()

conversation.send_message(
    "Ask the lodging sub-agent what it thinks about Covent Garden."
)
conversation.run()

# Report cost for user-defined agent types example
cost_2 = conversation.conversation_stats.get_combined_metrics().accumulated_cost
print(f"EXAMPLE_COST (user-defined agents): {cost_2}")

print("All done!")

# Full example cost report for CI workflow
print(f"EXAMPLE_COST: {cost_1 + cost_2}")
You can run the example code as-is.
The model name should follow the LiteLLM convention: provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o). The LLM_API_KEY should be the API key for your chosen provider.
ChatGPT Plus/Pro subscribers: You can use LLM.subscription_login() to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.

Using AgentDefinition for Declarative Sub-Agents

For simpler use cases, you can define sub-agents declaratively using AgentDefinition instead of writing factory functions:
This example is available on GitHub: examples/01_standalone_sdk/42_file_based_subagents.py
"""Example: Defining a sub-agent inline with AgentDefinition.

Defines a grammar-checker sub-agent using AgentDefinition, registers it,
and delegates work to it from an orchestrator agent. The orchestrator then
asks the builtin default agent to judge the results.
"""

import os
from pathlib import Path

from openhands.sdk import (
    LLM,
    Agent,
    Conversation,
    Tool,
    agent_definition_to_factory,
    register_agent,
)
from openhands.sdk.subagent import AgentDefinition
from openhands.sdk.tool import register_tool
from openhands.tools.delegate import DelegateTool, DelegationVisualizer


# 1. Define a sub-agent using AgentDefinition
grammar_checker = AgentDefinition(
    name="grammar-checker",
    description="Checks documents for grammatical errors.",
    tools=["file_editor"],
    system_prompt="You are a grammar expert. Find and list grammatical errors.",
)

# 2. Register it in the delegate registry
register_agent(
    name=grammar_checker.name,
    factory_func=agent_definition_to_factory(grammar_checker),
    description=grammar_checker.description,
)

# 3. Set up the orchestrator agent with the DelegateTool
llm = LLM(
    model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.getenv("LLM_BASE_URL"),
    usage_id="file-agents-demo",
)

register_tool("DelegateTool", DelegateTool)
main_agent = Agent(
    llm=llm,
    tools=[Tool(name="DelegateTool")],
)
conversation = Conversation(
    agent=main_agent,
    workspace=Path.cwd(),
    visualizer=DelegationVisualizer(name="Orchestrator"),
)

# 4. Ask the orchestrator to delegate to our agent
task = (
    "Please delegate to the grammar-checker agent and ask it to review "
    "the README.md file in search of grammatical errors.\n"
    "Then ask the default agent to judge the errors."
)
conversation.send_message(task)
conversation.run()

cost = conversation.conversation_stats.get_combined_metrics().accumulated_cost
print(f"\nTotal cost: ${cost:.4f}")
print(f"EXAMPLE_COST: {cost:.4f}")
You can run the example code as-is.