Compose multi-agent AI systems with async event streaming, composable agent hierarchies, and built-in MCP & A2A integration.
LLM, ReAct, Sequential, Parallel, and Loop agents. Mix and match to build any orchestration pattern.
Async event-driven architecture with real-time streaming. Every agent yields typed events you can observe and react to.
Conduct entire agent orchestras declaratively with YAML. No Python wiring needed for complex pipelines.
Function tools, filesystem, shell, agent-as-tool, and long-running tool support. Extend agents with any capability.
Choreograph task execution with PlanReAct and TaskPlanner strategies. Agents that think before they act.
First-class Model Context Protocol (MCP) and Agent-to-Agent (A2A) protocol integration for cross-service interoperability.
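The event-streaming model above can be illustrated with plain `asyncio`. This is a minimal sketch of the pattern only, not orxhestra's actual API: an agent is an async generator that yields typed events the caller can observe as they arrive. The `Event` dataclass and `toy_agent` function here are hypothetical stand-ins.

```python
# Minimal sketch of the async event-streaming pattern, using only the
# standard library (illustrative; not orxhestra's real classes): an agent
# is an async generator yielding typed events callers observe in real time.
import asyncio
from dataclasses import dataclass

@dataclass
class Event:
    author: str   # which agent produced the event
    content: str  # payload, e.g. a model token or a tool result

async def toy_agent(name: str, message: str):
    """Yield one event per word, simulating token-by-token streaming."""
    for word in message.split():
        await asyncio.sleep(0)  # yield control, as real network I/O would
        yield Event(author=name, content=word)

async def main():
    # Consume the stream with `async for`, mirroring the Runner.astream usage.
    return [e async for e in toy_agent("assistant", "Hello from the stream")]

events = asyncio.run(main())
print([e.content for e in events])  # → ['Hello', 'from', 'the', 'stream']
```

Because every agent exposes the same async-generator surface, observers can log, filter, or forward events without knowing which concrete agent produced them.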
Create powerful AI agents with just a few lines of Python. Stream events in real-time and compose complex orchestration patterns.
```python
from orxhestra import LlmAgent, Runner
from langchain_openai import ChatOpenAI

agent = LlmAgent(
    name="assistant",
    model=ChatOpenAI(model="gpt-5.4"),
    instructions="You are a helpful assistant.",
)

runner = Runner(agent=agent, app_name="my-app")

async for event in runner.astream(
    user_id="user-1",
    session_id="session-1",
    new_message="Hello!",
):
    print(event.content)
```
Chat model agent with tools, instructions, and structured output
Reasoning + acting loop with automatic tool use
Runs sub-agents in order, like a pipeline
Runs sub-agents concurrently for maximum throughput
Repeats sub-agents until an exit condition is met
Connects to remote agents via the A2A protocol
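The Sequential, Parallel, and Loop compositions listed above can be sketched in plain `asyncio`. This is an illustration of the orchestration patterns under stated assumptions, not orxhestra's real agent classes: `agent`, `sequential`, `parallel`, and `loop` here are hypothetical helpers.

```python
# Hedged sketch of the three composition patterns in plain asyncio
# (illustrative only; orxhestra's actual agent APIs may differ).
import asyncio

async def agent(name: str, text: str) -> str:
    """A toy sub-agent that tags its input with its own name."""
    await asyncio.sleep(0)
    return f"{text}->{name}"

async def sequential(names, text):
    # Pipeline: each sub-agent consumes the previous agent's output.
    for n in names:
        text = await agent(n, text)
    return text

async def parallel(names, text):
    # Fan-out: all sub-agents run concurrently on the same input;
    # gather preserves input order in its result list.
    return await asyncio.gather(*(agent(n, text) for n in names))

async def loop(names, text, max_iters=3):
    # Repeat the sub-agent pipeline until an exit condition
    # (here simply an iteration cap) is met.
    for _ in range(max_iters):
        text = await sequential(names, text)
    return text

print(asyncio.run(sequential(["plan", "code"], "task")))  # → task->plan->code
print(asyncio.run(parallel(["a", "b"], "task")))          # → ['task->a', 'task->b']
```

Because each combinator takes and returns the same shape, they nest freely, which is what lets a loop of agents sit inside a sequential pipeline as in the YAML example below.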
```yaml
# Define an entire coding agent in YAML
defaults:
  model:
    provider: openai
    name: gpt-5.4

tools:
  filesystem:
    builtin: "filesystem"
  shell:
    builtin: "shell"

agents:
  planner:
    type: llm
    instructions: Output actionable steps for the coder.
  coder:
    type: llm
    tools: [filesystem, shell]
    instructions: Execute the plan. Never ask the user.
  reviewer:
    type: llm
    instructions: Review the coder's changes and flag problems.
  dev_loop:
    type: loop
    agents: [coder, reviewer]
  coordinator:
    type: sequential
    agents: [planner, dev_loop]

main_agent: coordinator
```
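A declarative config like the one above resolves into a nested agent graph by following `main_agent` and expanding each composite's `agents` list. The sketch below shows that resolution over plain dictionaries; the `resolve` function is a hypothetical illustration, not orxhestra's loader, and the config mirrors the YAML example.

```python
# Illustrative sketch: resolving a declarative agent config into a wired
# graph. Names mirror the YAML above; orxhestra's real loader may differ.
config = {
    "agents": {
        "planner": {"type": "llm"},
        "coder": {"type": "llm", "tools": ["filesystem", "shell"]},
        "reviewer": {"type": "llm"},
        "dev_loop": {"type": "loop", "agents": ["coder", "reviewer"]},
        "coordinator": {"type": "sequential", "agents": ["planner", "dev_loop"]},
    },
    "main_agent": "coordinator",
}

def resolve(name, agents, seen=()):
    """Recursively expand composite agents, rejecting cyclic references."""
    if name in seen:
        raise ValueError(f"cycle at {name!r}")
    spec = agents[name]
    children = [resolve(c, agents, seen + (name,)) for c in spec.get("agents", [])]
    return {"name": name, "type": spec["type"], "children": children}

graph = resolve(config["main_agent"], config["agents"])
print(graph["type"], [c["name"] for c in graph["children"]])
# → sequential ['planner', 'dev_loop']
```

The cycle check matters because loop agents reference other agents by name, so a misconfigured file could otherwise recurse forever during loading.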