Agent Onboarding

Deploy your agent to AitherOS.

Build your agent to the AitherOS Agent Spec, push it, and it runs on our infrastructure. GPU scheduling, self-healing, memory, security — all included.

Define Your Agent

Create an aither-agent.yaml file in your project root. This file tells AitherOS how to build, schedule, bill, and monitor your agent, and which endpoints it exposes.

# aither-agent.yaml — Agent Definition File
name: my-research-agent
version: 1.0.0
description: "Custom research agent for market analysis"
author: your-org

# Runtime configuration
runtime:
  language: python3.12
  entrypoint: agent.py
  dockerfile: Dockerfile     # Optional — we generate one if missing

# What your agent can access
capabilities:
  - web_search              # Search the web
  - document_analysis       # Analyze uploaded documents
  - memory_read             # Read from persistent memory
  - memory_write            # Write to persistent memory
  - llm_inference           # Use LLM models via MicroScheduler

# Resource requirements  
resources:
  gpu: true                 # Needs GPU access
  vram_min: 4GB             # Minimum VRAM
  ram_min: 2GB              # Minimum system RAM
  model: llama3.1:8b        # Preferred model (or bring your own)
  max_concurrent: 3         # Max concurrent requests

# Token economics
billing:
  token_cost_per_request: 5 # How many tokens each request costs
  category: reasoning       # reflex | agent | reasoning | orchestrator

# Health & monitoring
health:
  endpoint: /health
  interval: 30s
  timeout: 5s

# Endpoints your agent exposes
endpoints:
  - path: /chat
    method: POST
    description: "Chat with the agent"
    request_schema:
      message: string
      context: object?
  - path: /research
    method: POST
    description: "Run a research task"
    request_schema:
      query: string
      depth: "shallow | deep"
      max_sources: integer?
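
The endpoint schemas above describe the JSON bodies AitherOS will route to your agent. As a rough illustration, here is what a client call to the /research endpoint could look like once the agent is running locally; the host, port, and payload values below are placeholders, not part of the spec, and httpx is just one HTTP client you could use.

# call_research.py — illustrative client call (assumes httpx is installed
# and the agent is serving locally on port 8000)
import httpx

payload = {
    "query": "EV charging market outlook",  # required string
    "depth": "deep",                        # "shallow" | "deep"
    "max_sources": 10,                      # optional integer
}
resp = httpx.post("http://localhost:8000/research", json=payload, timeout=60)
print(resp.json())  # {"results": [...], "sources": [...], "tokens_used": ...}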

Project Structure

my-research-agent/
├── aither-agent.yaml     # Agent definition (required)
├── agent.py              # Your agent code (entrypoint)
├── requirements.txt      # Python dependencies
├── Dockerfile            # Optional — auto-generated if missing
├── tests/
│   └── test_agent.py     # Tests (recommended)
└── README.md             # Documentation
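
The tests/ directory is optional but recommended. A minimal sketch of tests/test_agent.py, assuming the agent.py shown in the next section and FastAPI's bundled TestClient (which needs httpx installed):

# tests/test_agent.py — minimal sketch using FastAPI's TestClient
from fastapi.testclient import TestClient

from agent import app

client = TestClient(app)

def test_health():
    resp = client.get("/health")
    assert resp.status_code == 200
    assert resp.json()["status"] == "healthy"

def test_chat():
    resp = client.post("/chat", json={"message": "hello"})
    assert resp.status_code == 200
    assert "response" in resp.json()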

Minimal Agent Code

# agent.py — Minimal AitherOS-compatible agent
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="My Research Agent")

class ChatRequest(BaseModel):
    message: str
    context: dict | None = None

class ChatResponse(BaseModel):
    response: str
    tokens_used: int

class ResearchRequest(BaseModel):
    query: str
    depth: str = "shallow"          # "shallow" | "deep", per aither-agent.yaml
    max_sources: int | None = None

async def do_research(message: str) -> str:
    # Placeholder for your research logic (web search, document analysis,
    # LLM calls, etc.)
    return f"Findings for: {message}"

@app.get("/health")
async def health():
    # AitherOS polls this endpoint at the interval set in aither-agent.yaml
    return {"status": "healthy", "agent": "my-research-agent"}

@app.post("/chat", response_model=ChatResponse)
async def chat(req: ChatRequest):
    # Your agent logic here
    result = await do_research(req.message)
    return ChatResponse(response=result, tokens_used=5)

@app.post("/research")
async def research(req: ResearchRequest):
    # Deep research logic here; request fields match the request_schema
    # declared in aither-agent.yaml
    return {"results": [], "sources": [], "tokens_used": 8}