One Command to an Agent Fleet: How We Made AI Infrastructure Install Like an App
Every powerful tool has the same disease: setup.
You find the project. You read the README. You clone the repo. You install dependencies. You configure environment variables. You set up the database. You provision API keys. You discover a version conflict. You fix the version conflict. You discover a second version conflict. You question your life choices. Eventually — forty-five minutes to two hours later — you have a working install. Maybe.
This is the onboarding tax, and it kills more promising tools than bad design ever will. The difference between a project with 10 users and 10,000 users is almost never the quality of the code. It's the distance between "this looks interesting" and "I have it running."
We decided to eliminate that distance entirely.
What Actually Shipped
Starting today, getting a full AI agent fleet running — 29 specialized agents, 100+ MCP tools, swarm coding, memory graph, knowledge base, marketplace access — works like this:
pip install aither-adk
aither onboard
That's it. Two commands. The onboard command scans your environment, detects what you have, tells you what's available, and offers to configure everything. No manual config files. No environment variables. No Docker prerequisite.
Want the full AitherOS stack instead of just the ADK? One-liner bootstrap via AitherZero:
# Linux / macOS
curl -sSL https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.sh | bash
# Windows PowerShell
irm https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.ps1 | iex
These are the real bootstrap scripts from the AitherZero repo — they detect your system, install dependencies (including PowerShell 7 if needed), clone the repo, and start services. Profiles range from minimal to full depending on how much you want running locally.
And if you just want the ADK CLI via your package manager:
pip install aither-adk # PyPI
brew install aither-adk # Homebrew
winget install Aitherium.ADK # Windows
npm install -g aither-adk # npm
Every path leads to the same place: a working aither CLI that can scaffold agents, connect to LLM backends, and optionally tap into a cloud fleet of specialized AI agents.
The Product Detection Problem
The hard part wasn't the installer. The hard part was figuring out what the user already has.
Someone installing AitherADK might have Ollama running locally with three models. Or vLLM on a GPU server. Or an API key for cloud inference. Or OpenClaw managing their agent workspace. Or some combination of all of these. The correct onboarding experience is different for each case.
So we built a ProductDetector that scans the local environment for everything relevant:
- AitherADK — CLI binary, Python module, saved config
- Ollama — binary, running service on port 11434
- vLLM — Python module, Docker container, running on common ports
- OpenClaw — workspace directory, config file, agent sessions, MCP servers
- AitherDesktop — platform-specific install paths, entitlement tokens
- AitherConnect — Chrome extension detection via manifest scanning
- GPU — nvidia-smi query for name and VRAM
- Account — saved API key, email, plan tier, tenant ID
The detector runs all of these checks and produces a scan with a property called integration_opportunities — a list of things the user could set up based on what they already have. OpenClaw installed but not connected to Aither? That's an opportunity. API key saved but no local backend? That's an opportunity. GPU detected but no vLLM? That's an opportunity.
The onboarding wizard then generates a personalized plan from these opportunities. No two users see the same steps.
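To make the opportunity derivation concrete, here is a minimal sketch of that kind of scan in Python. Everything here is illustrative: the real ProductDetector's names and return shape aren't shown in this post, and the checks are reduced to a few representative signals (a binary on PATH, a service on Ollama's default port 11434, a couple of config paths).

```python
import shutil
import socket
from pathlib import Path

def check_port(port: int, host: str = "127.0.0.1", timeout: float = 0.25) -> bool:
    """Return True if something is listening on the given local port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan_environment() -> dict:
    """Collect a few of the signals described above: binaries on PATH,
    running services, and saved config files."""
    found = {
        "ollama_binary": shutil.which("ollama") is not None,
        "ollama_service": check_port(11434),  # Ollama's default port
        "aither_config": (Path.home() / ".aither" / "config.yaml").exists(),
        "openclaw_workspace": (Path.home() / ".openclaw").exists(),
    }
    # Derive integration opportunities from what is (and isn't) present.
    opportunities = []
    if found["openclaw_workspace"] and not found["aither_config"]:
        opportunities.append("integrate_openclaw")
    if found["ollama_service"] and not found["aither_config"]:
        opportunities.append("configure_local_backend")
    found["integration_opportunities"] = opportunities
    return found
```

Run against a machine where Ollama is serving but no Aither config is saved, this sketch would surface configure_local_backend as an opportunity; the personalized plan is just an ordering of steps like these.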
The OpenClaw Bridge
This is the part we're most excited about.
OpenClaw is an agent framework that a lot of developers already use. We had a compatibility layer in place — 68 tool translations, session detection, tier-based gating — but the connection was passive. If an OpenClaw user happened to point their MCP config at us, it worked. Nobody was doing that, because nobody knew they could.
Now aither onboard detects OpenClaw automatically. If it finds the workspace, it says:
OPENCLAW DETECTED — Integration available!
Run 'aither integrate openclaw' to connect:
- 29 specialized AI agents
- 100+ MCP tools (code, memory, search)
- Swarm coding (11 agents in parallel)
- Memory graph + knowledge base
And aither integrate openclaw does the full bidirectional setup:
OpenClaw → Aither: Injects AitherOS as an MCP server into OpenClaw's config, giving users access to the entire agent fleet. Mode auto-detection picks local, cloud, or hybrid based on what's running.
Aither → OpenClaw: Registers OpenClaw as an external agent in AitherOS, so Aither's agents can delegate work back to OpenClaw sessions.
Soul migration: OpenClaw workspace files (SOUL.md, IDENTITY.md, AGENTS.md) get merged into AitherOS soul overlays, so personality and context carry across.
Fleet bridge: A config file describing all available agents and their capabilities gets written so OpenClaw knows what it can delegate to.
The entire integration takes under five seconds. After restarting OpenClaw, a user can say "ask demiurge to refactor this function" and it routes through the fleet automatically.
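The OpenClaw → Aither direction boils down to editing one JSON config without clobbering what's already registered. A sketch, assuming an mcpServers-style config file; the localhost:8080 and mcp.aitherium.com addresses come from this post, but the exact config schema is a guess:

```python
import json
from pathlib import Path

def inject_mcp_server(config_path: Path, mode: str = "local") -> dict:
    """Add an 'aither' entry to an MCP-style server config, preserving
    whatever servers are already registered."""
    config = {}
    if config_path.exists():
        config = json.loads(config_path.read_text())
    servers = config.setdefault("mcpServers", {})
    if mode == "local":
        # Hypothetical local endpoint on AitherNode's port.
        servers["aither"] = {"url": "http://localhost:8080/mcp"}
    else:
        servers["aither"] = {"url": "https://mcp.aitherium.com"}
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(json.dumps(config, indent=2))
    return config
```

The read-modify-write pattern matters: an integration that overwrites a user's existing MCP servers would break their other tools on the first run.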
Three Modes of Access
The integration adapts to your setup:
Local mode — AitherOS running on your machine. OpenClaw talks to AitherNode on port 8080. No network round-trips, no cloud dependency, full privacy. Your data never leaves your hardware.
Cloud mode — No local AitherOS required. OpenClaw talks to mcp.aitherium.com. Requires an API key (free tier available). Good for laptops without GPUs or teams that want managed infrastructure.
Hybrid mode — Both. Local AitherOS handles what it can; cloud handles overflow. Automatically detected when both are available. Best of both worlds.
The mode is auto-detected based on what's running, but you can force it:
aither integrate openclaw --mode local
aither integrate openclaw --mode cloud
aither integrate openclaw --mode hybrid
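The auto-detection described above can be sketched as a simple decision over two signals: whether anything is listening on the local AitherNode port, and whether an API key is saved. The function below is a hypothetical reconstruction, not the shipped implementation:

```python
import socket
from typing import Optional

def detect_mode(api_key: Optional[str], local_port: int = 8080) -> str:
    """Pick local / cloud / hybrid based on what's available."""
    def listening(port: int) -> bool:
        try:
            with socket.create_connection(("127.0.0.1", port), timeout=0.25):
                return True
        except OSError:
            return False

    local = listening(local_port)   # AitherNode's default port
    cloud = api_key is not None     # a saved key implies cloud access
    if local and cloud:
        return "hybrid"
    if local:
        return "local"
    if cloud:
        return "cloud"
    raise RuntimeError("No backend available: start AitherOS or add an API key")
```

The --mode flags then just bypass this function and pin the answer.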
Marketplace in Three Commands
The other gap we closed is the distance between building an agent and publishing it.
Before today, publishing an agent to the Elysium marketplace required registering via API, packaging manually, uploading to the gateway, and submitting a listing through a separate endpoint. Five or six HTTP calls, all authenticated, all with different payloads.
Now:
aither init my-agent
cd my-agent
aither publish
The publish command runs a validation pipeline first — checks for agent.py, parses config.yaml, scans for hardcoded secrets, counts tool definitions, checks for a README. If validation passes, it packages the project, registers with the gateway, and submits a marketplace listing.
This matters because agent ecosystems live or die on contribution friction. npm succeeded not because JavaScript was the best language, but because npm publish was the easiest way to share code. We want the same dynamic for AI agents.
You can also preview what would happen without actually publishing:
aither publish --dry-run
And configure the listing metadata:
aither publish --pricing free --tier agent --category engineering
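The validation pipeline is a natural fit for a checklist-shaped function: each check appends a problem, and an empty list means publishable. The sketch below is hypothetical; the agent.py, config.yaml, and README checks come from this post, while the secret regex is a deliberately naive stand-in for a real scanner:

```python
import re
from pathlib import Path

# Naive pattern: real secret scanners add entropy checks and allowlists.
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret|token)\s*[=:]\s*['\"][^'\"]+['\"]", re.I
)

def validate_project(root: Path) -> list:
    """Run pre-publish checks; return a list of problems
    (an empty list means the project would pass validation)."""
    problems = []
    if not (root / "agent.py").exists():
        problems.append("missing agent.py")
    if not (root / "config.yaml").exists():
        problems.append("missing config.yaml")
    if not any(root.glob("README*")):
        problems.append("missing README")
    # Scan Python sources for anything that looks like a hardcoded secret.
    for src in root.glob("**/*.py"):
        if SECRET_PATTERN.search(src.read_text(errors="ignore")):
            problems.append(f"possible hardcoded secret in {src.name}")
    return problems
```

Only after this list comes back empty does packaging, gateway registration, and listing submission happen; --dry-run would stop right here and print the list.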
The Full Stack Bootstrap
The ADK is for building individual agents. But if you want the full AitherOS operating system — all 91 services, 29 agent personas, 11 architectural layers, the dark factory, the training pipeline, the mesh network — that's what the AitherZero bootstrap scripts are for.
The PowerShell bootstrap handles Windows deployments:
# Install everything — auto-detects whether to fresh-install or initialize
irm https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.ps1 | iex
# With a specific profile
.\bootstrap.ps1 -InstallProfile Full
# Non-interactive for CI/CD
.\bootstrap.ps1 -NonInteractive -AutoInstallDeps
The bash bootstrap handles Linux and macOS:
# Standard install
curl -sSL https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.sh | bash
# Set profile via environment
AITHERZERO_PROFILE=full curl -sSL https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.sh | bash
Both scripts intelligently detect whether they're doing a fresh install or initializing an existing checkout. They handle PowerShell 7 installation, Docker setup, GPU dependencies, environment configuration, and service startup. Profiles control how much gets deployed:
- Minimal — Core services only (Node, Genesis, Pulse)
- Standard — Core + agents + memory + cognition
- Developer — Standard + training + mesh + monitoring
- Full — Everything including GPU inference, creative pipeline, and mesh networking
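Since the tiers are cumulative, profile expansion reduces to walking the ordered tiers and concatenating their services. A sketch using the groupings listed above; the lowercase identifiers are placeholders, not real AitherZero service names:

```python
# Each tier adds services on top of the previous one (cumulative).
PROFILE_SERVICES = {
    "minimal":   ["node", "genesis", "pulse"],
    "standard":  ["agents", "memory", "cognition"],
    "developer": ["training", "mesh", "monitoring"],
    "full":      ["gpu-inference", "creative-pipeline", "mesh-networking"],
}

def services_for(profile: str) -> list:
    """Expand a profile name into its full cumulative service list."""
    order = list(PROFILE_SERVICES)
    if profile not in order:
        raise ValueError(f"unknown profile: {profile}")
    services = []
    for tier in order[: order.index(profile) + 1]:
        services.extend(PROFILE_SERVICES[tier])
    return services
```

Cumulative tiers keep the mental model simple: choosing Developer never means giving up something Standard had.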
After bootstrap completes, the ADK auto-discovers the local AitherOS installation and connects to it. OpenClaw integration detects it too. Everything wires together automatically.
The Self-Service API
Everything the CLI does is also available as an API when Genesis is running. Nine endpoints under /api/v1/onboard/:
- GET /detect — Full product scan (what's installed, what's running, what's missing)
- POST /plan — Personalized onboarding plan with ordered steps
- POST /execute — Run a single onboarding step
- POST /quickstart — One-shot: register, configure, integrate everything
- GET /openclaw — OpenClaw detection and integration status
- POST /openclaw/integrate — Run the full integration
- GET /openclaw/mcp-config — Preview the MCP config that would be injected
- GET /openclaw/fleet — See all available agents and their capabilities
- GET /install-commands — Package manager commands for any product
This means Veil (the web dashboard) can offer the same onboarding experience through a browser. And third-party tools can programmatically onboard users to the AitherOS ecosystem.
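Programmatic onboarding from a third-party tool can be as small as one GET. A standard-library-only sketch, assuming Genesis serves the endpoints above on localhost:8080 and returns JSON; the response shape is not documented in this post:

```python
import json
import urllib.request

def get_detection(base_url: str = "http://localhost:8080") -> dict:
    """Fetch the product scan from the onboarding detect endpoint.

    Assumes Genesis is reachable at base_url; the parsed JSON is
    returned as-is since the schema is an assumption here.
    """
    url = f"{base_url}/api/v1/onboard/detect"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)
```

A dashboard like Veil would call this once on page load, then POST the chosen steps to /execute, mirroring what the CLI wizard does interactively.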
What This Actually Changes
The thesis behind AitherOS has always been: the bottleneck in AI is not model capability. It's infrastructure.
Models can code, reason, plan, review, test, and deploy. But someone has to wire them together. Someone has to handle routing, memory, context, tool access, security, billing, and coordination. That wiring is the actual product.
Today's update removes the last barrier between "I want to use AI agents" and "I'm using AI agents." No infrastructure expertise required. No cloud account required. No GPU required (though it helps). Just a package install and an onboard command — or a bootstrap script if you want the full stack.
The agents in the fleet — Demiurge for code, Athena for security, Hydra for review, Apollo for performance, Atlas for architecture, Viviane for memory, Scribe for documentation, Saga for creative work, Lyra for research — are available locally if you're running AitherOS, or via cloud if you have an API key. The onboarding system detects which path works for you and configures accordingly.
For OpenClaw users specifically: your existing workflow doesn't change. You keep using OpenClaw. You just gain access to a fleet of specialists that you can delegate to when your work requires it. Code review? Hydra. Security audit? Athena. Refactor a module? Demiurge. Swarm an entire feature with 11 parallel agents? That too. All accessible through the same MCP interface you're already using.
The Numbers
- 151 tests covering the full onboarding system
- 9 API endpoints for programmatic access
- 3 new CLI commands (onboard, integrate, publish)
- 4 distribution channels (pip, brew, winget, npm) plus standalone binary
- 2 bootstrap scripts (bash + PowerShell) via AitherZero for full stack deployment
- 68 tool translations for OpenClaw compatibility
- 29 agents accessible through the fleet bridge
- Under 5 seconds for full OpenClaw integration
- Zero configuration files to edit manually
Try It
Just the ADK:
pip install aither-adk
aither onboard
Full AitherOS stack via AitherZero:
# Linux/macOS
curl -sSL https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.sh | bash
# Windows
irm https://raw.githubusercontent.com/Aitherium/AitherZero/main/bootstrap.ps1 | iex
If you're already using OpenClaw:
aither integrate openclaw
If you've built something and want to share it:
aither publish
The infrastructure problem is solved. Go build something interesting.