Single source of truth for onboarding. All install and first-run instructions live here. Runtime-specific configuration lives in runtime guides. Architecture concepts live in architecture.md.
Transform a vague idea into a verified, working codebase -- with any AI coding agent.
No Python install required. Run these three commands to go from idea to execution:
1. Install the plugin (in your terminal):

```bash
claude plugin marketplace add Q00/ouroboros
claude plugin install ouroboros@ouroboros
```

2. Set up and build (inside a Claude Code session -- start one with `claude`):

```bash
ooo setup
ooo interview "Build a task management CLI"
ooo run
```
That's it. `ooo interview` runs a Socratic interview that auto-generates a seed spec, and `ooo run` executes it.
`ooo` commands are Claude Code skills; they only work inside an active Claude Code session. `ooo setup` registers the MCP server globally (one-time) and optionally configures your project.
Use this path if you prefer a standalone terminal workflow, or are using a non-Claude runtime (e.g., Codex CLI, OpenCode).
Requires Python >= 3.12.
```bash
# Install
pip install ouroboros-ai

# Set up
ouroboros setup

# Run a seed spec
ouroboros run ~/.ouroboros/seeds/seed_abc123.yaml
```

Note: The standalone CLI interview is invoked via `ouroboros init start "your context"` (not `ooo interview`, which is Claude Code-specific). The interview flow is identical across both tools. Power users can also author seed YAML files directly -- see the Seed Authoring Guide.

Tip: `ouroboros run` requires a path to a seed YAML file as a positional argument (e.g., `ouroboros run ~/.ouroboros/seeds/seed_<id>.yaml`).
```bash
# Terminal
claude plugin marketplace add Q00/ouroboros
claude plugin install ouroboros@ouroboros
```

Then inside a Claude Code session:

```bash
ooo setup
ooo help   # verify installation
```
No Python, pip, or API key configuration needed -- Claude Code handles the runtime.
```bash
pip install ouroboros-ai           # Base package (core engine)
pip install ouroboros-ai[claude]   # + Claude Code runtime deps (anthropic, claude-agent-sdk)
pip install ouroboros-ai[litellm]  # + LiteLLM multi-provider support (100+ models)
pip install ouroboros-ai[mcp]      # + MCP server/client runtime support
pip install ouroboros-ai[tui]     # + Textual terminal UI
pip install ouroboros-ai[all]     # Everything (claude + litellm + mcp + tui + dashboard)

ouroboros --version               # verify CLI
```

Which extra do I need? If you only use Claude Code as your runtime, `ouroboros-ai[claude]` is sufficient. For multi-model support via LiteLLM, use `ouroboros-ai[litellm]`, or grab everything with `ouroboros-ai[all]`. Legacy note: `ouroboros-ai[dashboard]` is still accepted as a compatibility extra during the extras transition.
One-liner alternative (auto-detects your runtime and installs matching extras):

```bash
curl -fsSL https://raw.githubusercontent.com/Q00/ouroboros/main/scripts/install.sh | bash
```

To work from a source checkout instead:

```bash
git clone https://gitee.xchyun.asia/Q00/ouroboros
cd ouroboros
uv sync                     # base dependencies only
uv sync --all-extras        # or: include all optional extras
uv run ouroboros --version  # verify CLI
```

See CONTRIBUTING.md for the full contributor setup (linting, testing, pre-commit hooks).
| Path | Requirements |
|---|---|
| Claude Code (`ooo`) | Claude Code with plugin support |
| Standalone CLI (`ouroboros`) | Python >= 3.12, API key (Anthropic or OpenAI) |
| Codex CLI backend | Python >= 3.12, `npm install -g @openai/codex`, OpenAI API key with access to GPT-5.4 |
| OpenCode backend | Python >= 3.12, `opencode` on PATH, provider configured in OpenCode |
```bash
# Claude-backed flows
export ANTHROPIC_API_KEY="your-anthropic-key"

# Codex-backed flows
export OPENAI_API_KEY="your-openai-key"
```

Claude Code plugin users: your Claude Code session provides credentials automatically. No export needed.
`ouroboros setup` creates `~/.ouroboros/config.yaml` with sensible defaults. To edit manually:
```yaml
orchestrator:
  runtime_backend: claude   # claude | codex | opencode
llm:
  backend: claude_code      # claude_code | codex | litellm
logging:
  level: info
```

For Codex CLI, the recommended documented baseline is GPT-5.4 with medium reasoning effort. Put Ouroboros per-role overrides in `~/.ouroboros/config.yaml`, not in `~/.codex/config.toml`:
```yaml
# ~/.ouroboros/config.yaml
orchestrator:
  runtime_backend: codex
  codex_cli_path: /usr/local/bin/codex
llm:
  backend: codex
  qa_model: gpt-5.4
clarification:
  default_model: gpt-5.4
evaluation:
  semantic_model: gpt-5.4
consensus:
  advocate_model: gpt-5.4
  devil_model: gpt-5.4
  judge_model: gpt-5.4
```

`ouroboros setup --runtime codex` uses `~/.codex/config.toml` only for the Codex MCP/env hookup and installs managed Ouroboros rules/skills into `~/.codex/`.
```bash
# Override the runtime backend (highest priority)
export OUROBOROS_AGENT_RUNTIME=codex
```

Resolution order: `OUROBOROS_AGENT_RUNTIME` env var > `config.yaml` > auto-detection during `ouroboros setup`.
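The precedence rule can be sketched in shell form. This is an illustrative sketch, not Ouroboros source code; `CONFIG_RUNTIME` is a hypothetical stand-in for the value read from `~/.ouroboros/config.yaml`:

```shell
# Illustrative sketch of the documented precedence:
# OUROBOROS_AGENT_RUNTIME env var > config.yaml value > setup auto-detection.
resolve_runtime() {
  if [ -n "${OUROBOROS_AGENT_RUNTIME:-}" ]; then
    echo "$OUROBOROS_AGENT_RUNTIME"   # 1. explicit env override wins
  elif [ -n "${CONFIG_RUNTIME:-}" ]; then
    echo "$CONFIG_RUNTIME"            # 2. configured backend (stand-in variable)
  else
    echo "claude"                     # 3. fall back to an auto-detected default
  fi
}

OUROBOROS_AGENT_RUNTIME=codex CONFIG_RUNTIME=opencode resolve_runtime   # prints: codex
```

The env var wins even when the config file names a different backend, which is why it is the quickest way to test an alternate runtime without editing config.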
For the full list of configuration keys, see Configuration Reference.
This tutorial walks through a complete workflow. Examples use ooo skills (Claude Code); CLI equivalents are shown in callouts for terminal-based workflows.
Inside a Claude Code session:

```bash
ooo interview "I want to build a personal finance tracker"
```
CLI note: You can also run interviews from the terminal with `ouroboros init start --llm-backend <backend> "your idea"` (where `<backend>` is `claude_code`, `codex`, `opencode`, or `litellm`). For in-agent `ooo interview` usage: Claude Code works out-of-the-box; Codex CLI and OpenCode require `ouroboros setup --runtime <codex|opencode>` first to register the MCP server.
The Socratic Interviewer asks clarifying questions:
- "What platforms do you want to track?" (Bank accounts, credit cards, investments)
- "Do you need budgeting features?" (Yes, with category tracking)
- "Mobile app or web-based?" (Desktop-only with web export)
- "Data storage preference?" (SQLite, local file)
Answer until the ambiguity score drops below 0.2. The interview then auto-generates a seed spec:
```yaml
# Auto-generated seed (example)
goal: "Build a personal finance tracker with SQLite storage"
constraints:
  - "Desktop application only"
  - "Category-based budgeting"
  - "Export to CSV/Excel"
acceptance_criteria:
  - "Track income and expenses"
  - "Categorize transactions automatically"
  - "Generate monthly reports"
  - "Set and monitor budgets"
metadata:
  ambiguity_score: 0.15
  seed_id: "seed_abc123"
```

Then execute it:

```bash
ooo run
```

CLI equivalent: `ouroboros run ~/.ouroboros/seeds/seed_abc123.yaml` (requires the seed file path as a positional argument).
Ouroboros decomposes the seed into tasks via the Double Diamond (Discover -> Define -> Design -> Deliver) and executes them through your configured runtime backend.
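The phase flow can be pictured as a task plan. The sketch below is purely illustrative -- the keys and task names are hypothetical, not actual Ouroboros output:

```yaml
# Illustrative only -- hypothetical structure, not real Ouroboros output
double_diamond:
  discover: ["explore the problem space and gather context"]
  define:   ["pin the seed goal down into concrete tasks"]
  design:   ["choose an approach for each task"]
  deliver:  ["implement and verify against the acceptance criteria"]
```

The two "diamonds" alternate divergent exploration (Discover, Design) with convergent narrowing (Define, Deliver).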
Open a second terminal to watch progress in the TUI dashboard:

```bash
ouroboros monitor
```

The dashboard shows:
- Double Diamond phase progress
- Acceptance criteria tree with live status
- Cost, drift, and agent activity
See TUI Usage Guide for keyboard shortcuts and screen details.
`ooo run` (or `ouroboros run`) prints a session summary with the QA verdict when complete.
Useful follow-ups:
```bash
ooo evaluate   # Re-run 3-stage evaluation
ooo status     # Check drift and session state
ooo evolve     # Start evolutionary refinement loop
```
CLI equivalent: `ouroboros run seed.yaml --resume <session_id>` to resume, `ouroboros run seed.yaml --debug` for verbose output.
```bash
ooo interview "Build a REST API for a blog"
ooo run
```

```bash
ooo interview "User registration fails with email validation"
ooo run
```

```bash
ooo interview "Add real-time notifications to the chat app"
ooo run
```
Terminal users: Run interviews from the terminal with `ouroboros init start --llm-backend <backend> "your idea"`, then execute with `ouroboros run <seed_file>`. (Separate from in-agent `ooo` usage; terminal flows don't require MCP registration.)
Ouroboros delegates code execution to a pluggable runtime backend. Three ship out of the box:
| | Claude Code | Codex CLI | OpenCode |
|---|---|---|---|
| Best for | Claude Code users; subscription billing | OpenAI ecosystem; pay-per-token billing | Multi-provider flexibility; open-source tooling |
| Install | `pip install ouroboros-ai[claude]` | `pip install ouroboros-ai` + `npm install -g @openai/codex` | `pip install ouroboros-ai` + `opencode` on PATH |
| Skill shortcuts | `ooo` inside Claude Code | `ooo` after `ouroboros setup --runtime codex` installs managed Codex skills | `ooo` after `ouroboros setup --runtime opencode` |
| Config value | `claude` | `codex` | `opencode` |
All three backends run the same core workflow engine (seed execution, TUI). However, user-facing commands still differ: Claude Code has native in-session `ooo` workflows, while Codex CLI and OpenCode rely on `ouroboros setup --runtime <backend>` to configure the integration. The `ouroboros` CLI remains the most universal terminal path, and some advanced operations are still MCP/Claude-only.
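Switching backends is a config edit. A minimal sketch of the relevant key, using the same schema as the config examples earlier (the value comes from the table's "Config value" row):

```yaml
# ~/.ouroboros/config.yaml -- pick one value: claude | codex | opencode
orchestrator:
  runtime_backend: opencode
```

For Codex CLI and OpenCode, also run `ouroboros setup --runtime <backend>` so the MCP integration matches the chosen backend.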
For backend-specific configuration:
```bash
# Check skill is installed
claude plugin list

# Reinstall if needed
claude plugin install ouroboros@ouroboros --force
```

```bash
python --version   # Must be >= 3.12
pip install --force-reinstall ouroboros-ai
ouroboros --version
```

```bash
export ANTHROPIC_API_KEY="your-key"   # or OPENAI_API_KEY
env | grep -E 'ANTHROPIC|OPENAI'      # verify
```

```bash
ouroboros mcp info
ouroboros mcp serve
```

```bash
export TERM=xterm-256color
ouroboros tui monitor
```

Inside Claude Code:

```bash
ooo unstuck
```

From terminal:

```bash
ouroboros run seed.yaml --resume <session_id>
ouroboros cancel execution <session_id>
```

| Issue | Solution |
|---|---|
| Skill not loaded | `claude plugin install ouroboros@ouroboros --force` |
| CLI not found | `pip install ouroboros-ai` |
| API errors | Check `ANTHROPIC_API_KEY` / `OPENAI_API_KEY` |
| TUI blank | `export TERM=xterm-256color` |
| High costs | Reduce seed scope or use a lower model tier |
| Execution stuck | `ooo unstuck` or `ouroboros run seed.yaml --resume <id>` |
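The checks above can be rolled into one preflight script. This is a hedged sketch -- it only probes prerequisites this guide names (commands on PATH, Python version, API keys) and is not an official Ouroboros tool:

```shell
# Hedged preflight sketch -- prints one status line per prerequisite check
check() {   # usage: check <label> <command...>
  label="$1"; shift
  if "$@" >/dev/null 2>&1; then
    echo "$label: ok"
  else
    echo "$label: missing"
  fi
}

check "claude CLI"      command -v claude
check "ouroboros CLI"   command -v ouroboros
check "python >= 3.12"  python3 -c 'import sys; assert sys.version_info >= (3, 12)'
check "API key"         sh -c 'env | grep -qE "ANTHROPIC_API_KEY|OPENAI_API_KEY"'
```

Run it before filing an issue: the failing line usually maps directly onto a row in the table above.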
- Be specific -- "build a Twitter clone with real-time messaging" beats "build a social app"
- State constraints early -- budget, timeline, technical limitations
- Define success -- clear acceptance criteria produce better seeds
- Include non-functional requirements -- performance, security, scalability
- Define boundaries -- what is in scope and what is not
- Specify integrations -- APIs, databases, third-party services
- Validate first -- `ouroboros run seed.yaml --dry-run` checks YAML and schema before executing
- Monitor with the TUI -- run `ouroboros monitor` in a separate terminal during long workflows
- Keep QA enabled -- post-execution QA runs automatically unless you pass `--no-qa`
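As an illustration of these practices, a hand-written seed might look like the following. The project and its values are hypothetical; only the schema (goal / constraints / acceptance_criteria) follows the auto-generated example shown earlier:

```yaml
# Hand-authored illustration -- hypothetical project, not tool output
goal: "Build a Twitter clone with real-time messaging"
constraints:
  - "Web client only; mobile apps are out of scope"    # boundaries
  - "PostgreSQL for storage, Redis for pub/sub"        # integrations
  - "P95 message delivery latency under 200 ms"        # non-functional requirement
acceptance_criteria:
  - "Users can post, follow, and view a reverse-chronological feed"
  - "Online followers receive new messages in real time"
```

Note how each bullet above maps onto a line: a specific goal, explicit boundaries, named integrations, a measurable non-functional requirement, and testable acceptance criteria.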
- Seed Authoring Guide -- advanced seed customization
- Evaluation Pipeline -- understand the 3-stage verification gate
- TUI Usage Guide -- dashboard screens and keyboard shortcuts
- Architecture -- system design and component overview
- Configuration Reference -- all config keys and defaults
- Claude Code runtime guide -- backend-specific setup
- Codex CLI runtime guide -- backend-specific setup
- OpenCode runtime guide -- backend-specific setup
Need help? Open an issue on GitHub.