AgentFlow uses decorators as the primary way to define components. Each decorator handles schema generation, validation, and registration.

@agent

Configures an Agent subclass:
from framework.agents import Agent
from framework.decorators import agent

@agent(
    name="MyAgent",
    description="Does useful things",
    system_prompt="You are helpful.",
    enable_planning=True,
    llm_config={"model": "openai/gpt-4o", "temperature": 0.7},
)
class MyAgent(Agent):
    pass
The decorator stores an AgentConfig on MyAgent._agent_config. At startup, the AgentFactory uses this config to instantiate the agent.
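The storage pattern can be sketched roughly as follows. This is an illustrative stand-in, not the framework's actual implementation; the `AgentConfig` fields are assumptions inferred from the decorator arguments shown above.

```python
from dataclasses import dataclass, field

# Illustrative stand-in for the decorator's internals.
@dataclass
class AgentConfig:
    name: str
    description: str
    system_prompt: str
    enable_planning: bool = False
    llm_config: dict = field(default_factory=dict)

def agent(**kwargs):
    """Attach an AgentConfig to the decorated class."""
    def wrap(cls):
        cls._agent_config = AgentConfig(**kwargs)
        return cls
    return wrap

@agent(
    name="MyAgent",
    description="Does useful things",
    system_prompt="You are helpful.",
    enable_planning=True,
)
class MyAgent:
    pass

# A factory can later read the stored config to instantiate the agent:
config = MyAgent._agent_config
```

Because the config lives on the class itself, the factory only needs a reference to the class to build a fully configured instance.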

@tool

Wraps a function as a callable tool:
from framework.decorators import tool

@tool(
    name="get_weather",
    description="Get current weather for a city",
    agent="MainAgent",
)
async def get_weather(city: str, units: str = "celsius") -> dict:
    """Fetch weather data."""
    # weather_api is assumed to be an existing client in your application
    return await weather_api.get(city, units)
What the decorator does:
  1. Inspects the function signature and type hints
  2. Generates a JSON schema for the LLM’s tool-calling interface
  3. Registers the tool in the global registry
  4. Assigns it to the specified agent
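Steps 1 and 2 can be sketched with the standard `inspect` and `typing` modules. The schema shape below mirrors the common OpenAI-style tool format and is an assumption about what the framework actually emits:

```python
import inspect
from typing import get_type_hints

# Map Python annotations to JSON-schema types (illustrative subset).
TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean", dict: "object"}

def build_tool_schema(fn, name, description):
    """Derive a JSON schema for a tool from a function's signature."""
    sig = inspect.signature(fn)
    hints = get_type_hints(fn)
    properties, required = {}, []
    for pname, param in sig.parameters.items():
        properties[pname] = {"type": TYPE_MAP.get(hints.get(pname), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(pname)  # no default -> the LLM must supply it

    return {
        "name": name,
        "description": description,
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

async def get_weather(city: str, units: str = "celsius") -> dict:
    ...

schema = build_tool_schema(get_weather, "get_weather", "Get current weather for a city")
```

Note how `units` ends up optional because it has a default value, while `city` lands in `required`.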

@sub_agent

Defines an agent that a master agent can delegate work to:
from framework.agents import SubAgent
from framework.decorators import sub_agent

@sub_agent(
    name="ResearchAgent",
    description="Searches the web and compiles research",
    master_agents=["MainAgent"],
    system_prompt=RESEARCH_PROMPT,
)
class ResearchAgent(SubAgent):
    pass
The sub-agent appears as a callable tool in each master agent’s tool list.
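One plausible way this exposure works (an assumption about the internals, not documented behavior): the registry wraps the sub-agent's name and description in the same tool-schema shape the master already consumes, with a single free-text delegation parameter:

```python
def sub_agent_as_tool(name, description):
    """Expose a sub-agent as a tool entry in a master agent's tool list.

    The single 'task' parameter is a hypothetical delegation interface.
    """
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {"task": {"type": "string", "description": "Task to delegate"}},
            "required": ["task"],
        },
    }

entry = sub_agent_as_tool("ResearchAgent", "Searches the web and compiles research")
```

From the master agent's perspective, delegating to `ResearchAgent` is then indistinguishable from calling any other tool.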

Runtime Prompt Blocks

Agents can define dynamic prompt sections that are built at execution time:
@agent(name="MyAgent", ...)
class MyAgent(Agent):
    runtime_prompt_block_builders = {
        "available_skills": _build_skills_block,
    }
The builder function receives the execution context and returns prompt content to inject.