engineering · mcp · agents · developer-experience

Claude Code + AitherNode MCP: Giving an IDE Full Access to an Agentic OS

March 4, 2026 · 12 min read · David Parkhurst

Most AI coding assistants talk to your code through a file system. They read files, write files, run shell commands. It works. But it's the equivalent of operating a nuclear reactor through a terminal — you can technically do everything, but you're missing the control panel, the gauges, the safety interlocks, and the 30 years of institutional knowledge that the operators carry in their heads.

AitherOS has 85 services, 12 AI agents, a GPU scheduler, a knowledge graph, a code graph with AST-level analysis, a pain-based self-healing system, and a full orchestration layer. All of that infrastructure is useless to an external IDE unless you build a bridge. So we built one — using the Model Context Protocol (MCP).

The result: Claude Code can now operate the entire AitherOS agentic stack through a single MCP server. Not as a toy demo. As the primary development interface. Every day.

What Is MCP and Why It Matters Here

MCP (Model Context Protocol) is an open standard from Anthropic for connecting AI assistants to external tools and data sources. It works via stdio — the AI spawns a local process, talks to it over stdin/stdout, and the process exposes a catalog of typed tools the AI can call.

The key insight: MCP turns your local infrastructure into first-class tools for the AI. Instead of the AI fumbling through shell commands to interact with your system, it gets named, documented, typed operations. ask_agent("demiurge", "explain the auth flow") instead of curl -X POST localhost:8001/chat -d .... The AI understands the tool semantics and uses them intelligently.
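Under the hood, that tool call is a single JSON-RPC 2.0 message written to the server's stdin. A minimal sketch of the wire format (the envelope shape follows the MCP spec; the tool name and arguments here are illustrative):

```python
import json

# Sketch of the JSON-RPC 2.0 envelope an MCP client writes to the
# server's stdin when invoking a tool (method name per the MCP spec;
# the tool and arguments below are illustrative).
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    msg = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(msg)

wire = make_tool_call(1, "ask_agent",
                      {"agent": "demiurge", "question": "explain the auth flow"})
```

The server replies on stdout with a matching id and a result payload; discovery (tools/list), typing, and documentation all ride the same channel.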

The AitherNode MCP Server: 300+ Tools, Zero Configuration

AitherNode (apps/AitherNode/mcp_server.py) is the MCP gateway for the entire AitherOS ecosystem. It uses auto-discovery — at startup, it scans all 48 tool modules in apps/AitherNode/tools/mcp/, inspects every public function with a docstring, extracts parameter types from Python type hints, and registers them as MCP tools. No manual registration. No schemas to maintain.

Add a new function to any mcp_*.py module with a docstring and type hints — it appears as a Claude Code tool on next restart. That's it.

The 48 modules cover every layer of the OS:

| Category | Module | What It Does |
| --- | --- | --- |
| Agent Delegation | mcp_agent_delegate | Ask any agent a question, get structured JSON with file paths and code refs |
| Code Intelligence | mcp_codegraph | AST-level search, caller/callee graphs, cross-domain graph queries |
| Forge Orchestration | mcp_forge | Spawn subagents, parallel dispatch, trace trees, worktree management |
| Swarm Coding | mcp_swarm | 11-agent coding swarm: architect → 8 parallel agents → review → judge |
| Memory | mcp_memory | Semantic long-term memory with embeddings, working memory, recall |
| Git & GitHub | mcp_git | Status, branch, commit, push, PR creation — all from tool calls |
| Filesystem | mcp_filesystem | Read, write, copy, move, delete — with virtual path support |
| Infrastructure | mcp_infrastructure | Deploy VMs, manage Docker, Hyper-V templates |
| Security | mcp_security_ops | Security audits, jailbreak testing, chaos campaigns |
| Orchestrator | mcp_orchestrator | Direct chat with Genesis, LLM recommendations, streaming |
| Diagnostics | mcp_diagnostics | Context X-ray, pipeline snapshots, system fingerprinting |
| Training | mcp_training | Collect training examples, export data, trigger fine-tuning |
| Pipelines | mcp_pipelines | Unix-pipe-style data flow composition, YAML-declared pipelines |
| Vision & Voice | mcp_vision, mcp_voice | Image analysis, OCR, speech synthesis, emotion detection |
| Image Generation | mcp_generation | ComfyUI workflows, private image generation |
| Session Harvesting | mcp_session_harvester | Crawl Claude/Codex/Gemini sessions for training data |

And 30+ more modules covering RBAC, social posting, Neocities deployment, mesh networking, Ollama management, context synthesis, gateway routing, specs, personas, and roadmap tracking.

Setup: 60 Seconds to Full Integration

The entire setup is one script:

# From the AitherOS directory:
python scripts/setup-claude.py

This generates two files:

  1. .mcp.json — Tells Claude Code where the MCP server lives:
    {
      "mcpServers": {
        "aitheros": {
          "command": ".venv/Scripts/python.exe",
          "args": ["apps/AitherNode/mcp_server.py"]
        }
      }
    }
  2. .claude/settings.local.json — Pre-approves the MCP tools so you don't get a permission prompt for every call:
    {
      "permissions": {
        "allow": ["mcp__aitheros__*"]
      },
      "enableAllProjectMcpServers": true
    }

Restart Claude Code. The MCP server starts automatically. 300+ tools are now available. No Docker, no API keys, no cloud dependency. It runs on your local Python venv and talks to your local AitherOS services over HTTP.

What It Looks Like in Practice

Here's what changes when Claude Code has access to the full AitherOS stack:

1. Agent Delegation: Ask the Expert

Instead of Claude Code trying to understand the codebase through grep and file reads, it can delegate to Demiurge (the code expert agent) who has the full context pipeline — CodeGraph, MemoryGraph, embeddings, conversation history:

# Claude Code calls:
mcp__aitheros__ask_agent(
  question="How does the effort routing work in AgentForge?",
  agent="demiurge",
  effort=5
)

# Returns structured JSON:
{
  "answer": "AgentForge routes through UCB + EffortScaler...",
  "files": [
    {"path": "lib/orchestration/AgentForge.py", "line": 142, "relevance": "high"},
    {"path": "lib/core/EffortScaler.py", "line": 67, "relevance": "high"}
  ],
  "code_refs": [
    {"name": "_get_llm_func", "file": "AgentForge.py", "start_line": 198, "type": "method"}
  ],
  "suggestions": ["Check effort tier mapping in agent_kernel.yaml"],
  "_agent": "demiurge",
  "_model": "aither-orchestrator"
}

The agent doesn't just answer — it returns exact file paths, line numbers, and code references that Claude Code can navigate to. It's like having a senior engineer on call who knows every line of the codebase.
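Because the payload is structured, downstream tooling can consume it mechanically. A small sketch (the response dict mirrors the example above; jump_targets is a hypothetical helper, not part of the MCP server):

```python
# Hypothetical helper: turn an ask_agent-style response (shape taken
# from the example above) into editor-friendly path:line jump targets.
def jump_targets(resp: dict) -> list:
    return [f'{f["path"]}:{f["line"]}' for f in resp.get("files", [])]

response = {
    "files": [
        {"path": "lib/orchestration/AgentForge.py", "line": 142, "relevance": "high"},
        {"path": "lib/core/EffortScaler.py", "line": 67, "relevance": "high"},
    ],
}
targets = jump_targets(response)
# targets: ["lib/orchestration/AgentForge.py:142", "lib/core/EffortScaler.py:67"]
```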

2. CodeGraph: AST-Level Code Intelligence

Forget regex-based code search. CodeGraph parses the entire codebase into an AST-aware index with function signatures, call graphs, and semantic embeddings:

# Claude Code calls:
mcp__aitheros__codegraph_search(
  query="circuit breaker health check",
  limit=5
)

# Returns chunks with exact locations, signatures, and relevance scores:
{
  "results": [
    {
      "id": "lib/core/CircuitBreaker.py::check_health",
      "type": "method",
      "signature": "async def check_health(self, service: str) -> HealthStatus",
      "file": "lib/core/CircuitBreaker.py",
      "line": 234,
      "score": 0.92
    }
  ]
}

Then ask for the call graph — who calls this function, and what does it call:

mcp__aitheros__codegraph_get_context(
  chunk_id="lib/core/CircuitBreaker.py::check_health"
)
# Returns: callers, callees, import chain, containing class

Claude Code now understands blast radius before making changes. Not through heuristics — through actual static analysis.
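One way to use that context: walk the caller edges transitively before touching a function. A sketch under the assumption that codegraph_get_context returns caller lists per chunk (the graph below is invented for illustration):

```python
# Illustrative sketch: estimate blast radius by walking caller edges
# transitively from a chunk. The graph shape (chunk_id -> callers) is
# an assumption based on the codegraph_get_context description above.
def blast_radius(graph: dict, chunk_id: str) -> set:
    affected, stack = set(), [chunk_id]
    while stack:
        node = stack.pop()
        for caller in graph.get(node, {}).get("callers", []):
            if caller not in affected:
                affected.add(caller)
                stack.append(caller)
    return affected

graph = {  # invented example data
    "CircuitBreaker.py::check_health": {"callers": ["HealthMonitor.py::poll"]},
    "HealthMonitor.py::poll": {"callers": ["Genesis.py::tick"]},
    "Genesis.py::tick": {"callers": []},
}
impacted = blast_radius(graph, "CircuitBreaker.py::check_health")
# impacted: the poll and tick chunks, i.e. everything a change could break
```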

3. Forge: Autonomous Agent Dispatch

For complex tasks, Claude Code can spawn autonomous subagents that have their own ReAct loops, tool access, and identity:

mcp__aitheros__forge_subagent(
  task="Audit all FastAPI routes for missing auth middleware",
  agent_type="mr_robot",  # Security specialist
  effort=7,
  max_turns=15
)
# Mr. Robot runs autonomously: reads code, checks patterns,
# queries CodeGraph, writes a structured security report

Or dispatch multiple agents in parallel:

mcp__aitheros__forge_parallel(
  tasks=["Review auth module", "Check rate limiting", "Audit CORS config"],
  agent_type="auto",  # Atlas routes each to the best agent
  effort=5
)

4. Swarm Coding: 11 Agents on One Problem

For big features, invoke the full coding swarm:

mcp__aitheros__swarm_run_and_deliver(
  task="Implement a WebSocket-based live log viewer",
  language="python",
  effort=7
)
# Phase 1: Architect designs the solution
# Phase 2: 8 agents in parallel (3 coders, 2 testers, 2 security, 1 scribe)
# Phase 3: Reviewer consolidates
# Phase 4: Judge scores quality
# Phase 5: Sandbox tests the code
# Phase 6: Delivers results to inbox

Claude Code just dispatched 11 AI agents that each have their own LLM context, tools, and expertise domain. The output is tested, reviewed, and scored — all from a single MCP tool call.

5. System Awareness

Claude Code isn't blind to the infrastructure:

mcp__aitheros__get_system_status()
# Returns: running services, GPU state, health scores, active agents, pain levels

mcp__aitheros__get_system_snapshot(save_to_memory=true)
# Full system state saved to knowledge graph for future reference

When Claude Code suggests deploying a change, it already knows which services are running, which are degraded, and whether the GPU has headroom.

The Feedback Loop: Every Tool Call Teaches the System

This is the part that matters most, and it's what makes MCP different from just wrapping shell commands.

Every MCP tool call flows through AitherOS infrastructure. That means:

  • Strata ingestion — Every tool call is logged and indexed for analytics
  • Memory graph — Agent responses are stored as episodic memories that improve future context
  • Training data harvesting — Real tool-call patterns feed ToolGraph's NanoGPT predictor
  • FluxEmitter events — Tool calls emit typed events that trigger downstream reactions
  • Pain system — Failed calls contribute to service health scores and trigger self-healing

When Claude Code calls ask_agent, the question flows through the full 12-stage ContextPipeline. That means CodeGraph context, MemoryGraph memories, live FluxEmitter state, and OS baseline anomaly scores are all injected into the agent's system prompt before it answers. The agent isn't just reading files — it's reasoning with the full cognitive apparatus of the OS.

Built-in tools bypass all of that. A grep finds text. An MCP tool call that reaches Demiurge finds text, understands its call graph, checks what changed recently, and remembers what the user asked about it last week.

The Auto-Discovery Engine

The MCP server's auto-discovery deserves its own section because it's what makes this maintainable at scale. Here's the registration logic:

# mcp_server.py — auto-discovery loop (simplified)
for module_name in mcp_package.__all__:  # 48 modules
    module = getattr(mcp_package, module_name)  # resolve name to module object
    for func_name, func in inspect.getmembers(module, inspect.isfunction):
        if func_name.startswith("_"):
            continue  # Skip private functions
        if not func.__doc__:
            continue  # Must have a docstring

        # Extract params from type hints
        sig = inspect.signature(func)
        params = {}
        for param_name, param in sig.parameters.items():
            params[param_name] = infer_json_type(param.annotation)

        TOOLS[func_name] = {
            "fn": func,
            "description": func.__doc__.strip().split("\n")[0],
            "params": params,
        }

Convention over configuration. Any public function with a docstring in any mcp_*.py file becomes a tool. The docstring's first line becomes the tool description. Type hints become the parameter schema. No decorators, no registration calls, no YAML schemas.
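The infer_json_type helper referenced above isn't shown; a plausible minimal version looks like this (the real mapping in mcp_server.py may differ):

```python
import inspect

# Hedged sketch of infer_json_type: map common Python annotations to
# JSON-schema type names. The actual helper in mcp_server.py may differ.
def infer_json_type(annotation) -> str:
    mapping = {str: "string", int: "integer", float: "number",
               bool: "boolean", list: "array", dict: "object"}
    if annotation is inspect.Parameter.empty:
        return "string"  # no hint: fall back to string
    return mapping.get(annotation, "string")

def ask_agent(question: str, effort: int = 5) -> str:
    """Ask an agent a question (signature for illustration only)."""

sig = inspect.signature(ask_agent)
params = {name: infer_json_type(p.annotation)
          for name, p in sig.parameters.items()}
# params: {"question": "string", "effort": "integer"}
```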

To add a new tool to Claude Code's toolkit, a developer writes a normal Python function:

async def check_vram_usage() -> str:
    """Get current GPU VRAM usage across all loaded models."""
    # ... implementation ...

Restart the MCP server. Claude Code can now call mcp__aitheros__check_vram_usage().

Routing: When to Use MCP vs. Built-In

We established hard routing rules in our CLAUDE.md instructions to ensure Claude Code uses MCP tools by default:

| Task | Use MCP | Not Built-In |
| --- | --- | --- |
| Search code | explore_code, codegraph_search | Grep, Glob |
| Understand architecture | ask_agent | Task(Explore) |
| Find callers/callees | codegraph_get_context | Grep for function name |
| Complex dev tasks | forge_subagent | Task(demiurge) |
| Remember things | remember, recall | Writing .md files |
| Shell commands | run_terminal_command | Bash |
| Git operations | git_status, git_diff | Bash(git ...) |
| System status | get_system_status | Bash(curl ...) |

Built-in tools are allowed as fallbacks when Genesis isn't running or for trivial single-line edits. But the default path is always through MCP — because that's where the learning loop lives.

What This Changes

Before MCP integration, Claude Code was a very smart text editor. It could read and write code, run tests, and use git. Impressive, but fundamentally limited to file-system-level operations.

After MCP integration, Claude Code is an AitherOS operator. It can:

  • Delegate research to specialized agents who have deep domain knowledge
  • Query a code graph with AST-level precision instead of regex-based search
  • Spawn autonomous subagents for complex multi-step investigations
  • Launch 11-agent coding swarms for large features
  • Store and recall memories across sessions
  • Monitor system health, GPU state, and service availability
  • Deploy infrastructure, manage security, and trigger chaos testing
  • Generate images, analyze voice, and process video through perception agents
  • Publish to Neocities, post to social platforms, and manage roadmap items

The gap between “AI coding assistant” and “AI development platform” is exactly the gap between talking to files and talking to an operating system. MCP bridges it.

The Numbers

| Metric | Value |
| --- | --- |
| MCP tool modules | 48 |
| Total tools exposed | 300+ |
| Setup time | <60 seconds |
| Config files generated | 2 (.mcp.json + settings.local.json) |
| Lines of MCP server code | ~200 (auto-discovery handles the rest) |
| External dependencies | 0 (no cloud, no API keys) |
| Agent identities available | 12+ (Demiurge, Atlas, Aeon, Mr. Robot, etc.) |
| Swarm agent count | 11 (architect + 8 workers + reviewer + judge) |
| Passing tests | 1,246+ |

Try It Yourself

If you have a Python-based service ecosystem, you can build the same thing. The pattern is:

  1. Write Python functions with docstrings and type hints
  2. Have those functions call your services over HTTP
  3. Auto-discover them with inspect at MCP server startup
  4. Point Claude Code at the MCP server via .mcp.json
  5. Write routing rules in CLAUDE.md so the AI prefers your tools
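Steps 1 and 3 fit in a few lines. A minimal, self-contained sketch (the tool function and registry names are illustrative, not AitherOS code):

```python
import inspect

# Step 1: ordinary functions with docstrings and type hints. In practice
# these would live in your mcp_*.py modules and call services over HTTP;
# the function below is a stand-in.
class tools_module:
    @staticmethod
    def get_service_health(service: str) -> str:
        """Check the health endpoint of a named service."""
        return f"{service}: ok"  # stand-in for a real HTTP call

    @staticmethod
    def _helper():
        pass  # private: skipped by discovery

# Step 3: auto-discover public, documented functions into a registry.
TOOLS = {}
for name, fn in inspect.getmembers(tools_module, inspect.isfunction):
    if name.startswith("_") or not fn.__doc__:
        continue
    TOOLS[name] = {"fn": fn, "description": fn.__doc__.strip().split("\n")[0]}
```

From there, exposing the registry over stdio is a matter of wiring it into an MCP server loop, which the protocol SDKs handle.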

The MCP protocol is open. The auto-discovery pattern is general. What makes AitherOS unique isn't the bridge — it's what's on the other side: 85 services, a full cognitive pipeline, and agents that actually know your code.

Claude Code was already the best AI coding assistant. Now it has an operating system behind it.