A ready-to-run example is included below!
Overview
Agent delegation allows a main agent to spawn multiple sub-agents and delegate tasks to them for parallel processing. Each sub-agent runs independently with its own conversation context and returns results that the main agent can consolidate and process further. This pattern is useful when:
- Breaking down complex problems into independent subtasks
- Processing multiple related tasks in parallel
- Separating concerns between different specialized sub-agents
- Improving throughput for parallelizable work
How It Works
The delegation system consists of two main operations:
1. Spawning Sub-Agents
Before delegating work, the agent must first spawn sub-agents with meaningful identifiers. Each sub-agent:
- Gets a unique identifier that the agent specifies (e.g., “lodging”, “activities”)
- Inherits the same LLM configuration as the parent agent
- Operates in the same workspace as the main agent
- Maintains its own independent conversation context
2. Delegating Tasks
Once sub-agents are spawned, the agent can delegate tasks to them. The delegate operation:
- Runs all sub-agent tasks in parallel using threads
- Blocks until all sub-agents complete their work
- Returns a single consolidated observation with all results
- Handles errors gracefully and reports them per sub-agent
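The behavior described above can be sketched independently of the SDK. The names below (`delegate`, `run_one`) are illustrative, not the tool's actual implementation; the sketch only shows the mechanism: run tasks on threads, block until all finish, and fold results and per-sub-agent errors into one consolidated observation.

```python
# Illustrative sketch of the delegate mechanism (not the SDK's real code).
from concurrent.futures import ThreadPoolExecutor


def delegate(sub_agents, tasks):
    """sub_agents: dict of id -> callable taking a task string;
    tasks: dict of id -> task description."""

    def run_one(agent_id):
        try:
            return agent_id, sub_agents[agent_id](tasks[agent_id]), None
        except Exception as exc:  # errors are captured per sub-agent
            return agent_id, None, str(exc)

    # Threads run the tasks concurrently; leaving the context manager
    # blocks until every sub-agent has completed its work.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        results = list(pool.map(run_one, tasks))

    # Consolidate all results (and errors) into a single observation.
    return "\n".join(
        f"[{agent_id}] " + (f"ERROR: {error}" if error else result)
        for agent_id, result, error in results
    )
```

A failing sub-agent does not abort the others; its error simply appears in the consolidated output under its own id.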
Setting Up the DelegateTool
Register the Tool
Add to Agent Tools
Configure Maximum Sub-Agents (Optional)
You can limit the maximum number of concurrent sub-agents.
Tool Commands
spawn
Initialize sub-agents with meaningful identifiers.

Parameters:
- command: "spawn"
- ids: List of string identifiers (e.g., ["research", "implementation", "testing"])
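As a sketch, the arguments for a spawn call might look like the following; the surrounding call site depends on your setup and is not shown here.

```python
# Hypothetical spawn arguments, using the parameter names listed above.
spawn_args = {
    "command": "spawn",
    "ids": ["research", "implementation", "testing"],
}
```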
delegate
Send tasks to specific sub-agents and wait for results.

Parameters:
- command: "delegate"
- tasks: Dictionary mapping sub-agent IDs to task descriptions
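Continuing the sketch, a delegate call might carry arguments like these; each key in tasks must match a previously spawned sub-agent id.

```python
# Hypothetical delegate arguments, using the parameter names listed above.
delegate_args = {
    "command": "delegate",
    "tasks": {
        "research": "Survey existing approaches to the problem.",
        "implementation": "Draft the core module.",
        "testing": "Write unit tests for the module.",
    },
}
```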
Ready-to-run Example
This example is available on GitHub: examples/01_standalone_sdk/25_agent_delegation.py
The model name should follow the LiteLLM convention:
provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o).
The LLM_API_KEY should be the API key for your chosen provider.
Using AgentDefinition for Declarative Sub-Agents
For simpler use cases, you can define sub-agents declaratively using AgentDefinition instead of writing factory functions:
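A minimal sketch of the declarative idea follows. The field names here are assumptions for illustration only, not the SDK's actual AgentDefinition schema; the point is that each sub-agent is described as data rather than constructed by a factory function.

```python
# Sketch only: field names are illustrative, not the SDK's real schema.
from dataclasses import dataclass


@dataclass
class AgentDefinition:
    name: str
    description: str
    system_prompt: str


# Sub-agents defined as plain data instead of factory functions.
subagents = [
    AgentDefinition(
        name="researcher",
        description="Gathers background information",
        system_prompt="You research topics and summarize findings.",
    ),
    AgentDefinition(
        name="writer",
        description="Drafts prose from research notes",
        system_prompt="You turn research notes into clear prose.",
    ),
]
```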
This example is available on GitHub: examples/01_standalone_sdk/42_file_based_subagents.py
