vision · founder · agents

You Won't Be Replaced by AI. You'll Become an Operator.

March 5, 2026 · 13 min read · David Parkhurst

I've been struggling to articulate why I'm optimistic about AI and the future of work. Here's what I've landed on:

We're going to be building software for literally everything — and then integrating all of that software together.

The sheer scale of that demand, combined with the real constraints that prevent any single player from monopolizing it, creates an economic structure that distributes opportunity instead of concentrating it.

Not fewer jobs. More work than we've ever seen — just a different kind.

I'm building AitherOS in public because I want people to see the world the way I see it. For me, this is as much self-expression as it is engineering.

Let me unpack that, because every clause matters.

Software for Everything

Right now, when we talk about "AI automation," most people picture a robot doing a factory worker's job, or ChatGPT writing an email. That's thinking way too small.

What's actually coming is software for every process in every business. Not just the big, obvious processes — not just "automate customer support" or "generate marketing copy." Every process. The plumber who still tracks jobs in a spiral notebook. The bakery that reorders flour by eyeballing the bin. The HVAC company whose dispatch is a whiteboard and a phone call. The accountant who manually reconciles three spreadsheets every Friday because no one ever built the integration.

Every one of those processes is a piece of software waiting to be written. And now, for the first time in history, the cost of writing that software is collapsing toward zero. An AI agent can observe a workflow, understand the inputs and outputs, and build a custom tool for it in hours instead of months.

But here's the part the hype cycle skips over: writing the software is the easy part. Integration is the real work. Every business is a unique snowflake of tools, processes, human preferences, and edge cases. The bakery's POS system talks to one payment processor, their supplier uses a different ordering portal, their bookkeeper wants QuickBooks exports in a specific format, and the owner checks everything on their phone at 5 AM. Making all of that work together — seamlessly, reliably, in a way the owner actually trusts — is not a problem that scales to infinity with a single API call.

That's software engineering. That's systems integration. That's the work. And there is an ungodly amount of it.

The Demand Surface Is Essentially Infinite

Think about the numbers for a second. There are roughly 33 million small businesses in the United States alone. Most of them run on duct tape — a mix of spreadsheets, paper, text messages, and "we've always done it this way." Each one has dozens of processes that could benefit from custom software. That's not millions of automation opportunities. It's hundreds of millions.

And that's just the US. And just small businesses. We haven't touched mid-market, enterprise, government, education, healthcare, nonprofits, or the individual consumers who'll want their own personal agent fleets.

The demand surface for software — custom, integrated, maintained software — is for all practical purposes infinite. It was already infinite before AI. We just couldn't see it because the cost of production was too high. A custom app for one bakery's flour-ordering workflow wasn't worth a developer's time at $150/hour. At $0.50/hour of agent compute? Suddenly it is.

This is the part that makes me optimistic: AI doesn't shrink the market for work. It reveals how much work there always was. The demand was latent. Dormant. Hidden behind a cost barrier. AI removes the cost barrier, and the demand explodes into view.

But Compute Isn't Infinite

Here's where the "AI replaces everyone" narrative falls apart on contact with physics.

Running agentic workflows takes compute. Real compute. GPUs, memory, electricity. Running an LLM to generate a response is not free — it costs somewhere between a fraction of a cent and several dollars per call, depending on the model. Running a fleet of agents that manage a business's operations 24/7 requires persistent compute resources. The agents need to monitor, react, plan, and execute continuously.

You can't spin up infinite agents on infinite hardware for zero cost. The compute constraint is real and it's not going away. Even as hardware improves and costs decline, demand will grow to fill the available capacity — because, as I just described, the demand surface is essentially infinite. Jevons paradox applies here. Cheaper compute means more uses for compute, not less total spend on compute.

So what does this mean practically? It means one person with a fleet of AI agents can do the work that used to require a team. But that doesn't mean we need fewer people. It means each person can serve more of that infinite demand surface. The constraint shifts from "we don't have enough developers" to "we don't have enough operators deploying agents to all the places that need them."

One human doing the work of ten doesn't eliminate nine jobs. It reveals ninety more that were invisible before.

The SMB Deployment Model

Let me make this concrete, because concrete is where hype goes to die or to prove itself.

Imagine you're technically competent — you can configure AI agents, wire up integrations, deploy and maintain systems. Not necessarily a world-class engineer. Just competent. You understand the tools.

What if you deployed agents to every small business in your town?

Walk into the local print shop. They spend 4 hours a week doing manual invoicing. You deploy an agent that watches their job completion system, generates invoices, sends them out, and follows up on late payments. Cost to you: maybe $30/month in compute. Value to them: they stop losing track of invoices and recoup an extra $2,000/month in payments they were accidentally letting slide. You charge them $200/month. Everyone wins.
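To make the shape of that agent concrete, here's a minimal sketch of one pass of its loop: invoice newly completed jobs and flag overdue invoices for follow-up. All the type names and the 30-day threshold are illustrative assumptions, not a real integration with any job-tracking system.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Job:
    id: str
    customer: str
    amount: float
    completed: date
    invoiced: bool = False

@dataclass
class Invoice:
    job_id: str
    customer: str
    amount: float
    sent: date
    paid: bool = False

def run_invoicing_pass(jobs, invoices, today):
    """One pass of the agent: invoice any completed-but-unbilled jobs,
    then flag invoices more than 30 days old and still unpaid."""
    for job in jobs:
        if not job.invoiced:
            invoices.append(Invoice(job.id, job.customer, job.amount, sent=today))
            job.invoiced = True
    overdue = [inv for inv in invoices
               if not inv.paid and (today - inv.sent) > timedelta(days=30)]
    return overdue
```

In practice the real work is everything around this loop: reading the shop's actual job system, matching their invoice template, and handling the edge cases their bookkeeper cares about.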

Next door, the law firm. Their paralegals spend half their time on document review. You deploy an agent that does first-pass review, flags relevant clauses, and generates summaries. The firm doesn't fire the paralegals — it reassigns them to the work they actually went to school for. You charge the firm $500/month.

The veterinary clinic. The auto body shop. The real estate office. The nonprofit. Every single one has processes that are bleeding time and money because nobody ever built them the right software. You're not selling them a SaaS subscription to a generic tool. You're building and deploying their agent, configured for their workflows, integrated with their existing tools.

You could probably make a decent living doing this. A real, solid living. Serve 30-40 local businesses, charge $200-500/month each, and you're making $6,000-20,000/month in recurring revenue while your agents do the heavy lifting. You check in, maintain the systems, handle edge cases, and — critically — manage the human relationships.
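The arithmetic behind that claim is simple enough to write down. This is an illustration, not a business plan: the $30/month compute figure comes from the print-shop example above and is an assumption.

```python
def monthly_recurring(clients, fee_per_client, compute_per_client=30):
    """Net monthly recurring revenue after agent compute costs.
    The $30/month compute cost per client is an assumed figure."""
    return clients * (fee_per_client - compute_per_client)

# Low end: 30 clients at $200/month. High end: 40 clients at $500/month.
low_end = monthly_recurring(clients=30, fee_per_client=200)
high_end = monthly_recurring(clients=40, fee_per_client=500)
```

Net of compute, the range works out to roughly $5,100-$18,800/month — in the same band as the gross figures above, because compute is a rounding error next to the fees.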

The Relationship Ceiling

And here's the part that makes this an economic model that distributes opportunity rather than concentrating it: managing human relationships is still hard, and AI doesn't fix that.

You can deploy agents to the print shop, but you still had to walk in there, earn the owner's trust, understand their specific workflow, explain what you're proposing in terms they understand, install it, train them on it, and be available when something breaks at 7 PM on a Tuesday. That's not an API call. That's a relationship.

Human relationships don't scale infinitely. You can maintain maybe 30-50 active client relationships before the quality of service starts to degrade. You start missing check-ins. You stop noticing changes in their business. You become the person who deployed the robots and disappeared.

That ceiling is real. And it's good.

Because the moment you hit your ceiling in your town, there's market space for someone else to start doing the same thing in the next town over. Or in the same town, in industries you don't understand as well. You know restaurants and retail? Great — there's room for someone else who knows construction and trades. You serve the north side? Someone else takes the south side.

The relationship ceiling means this isn't a winner-take-all market. It can't be. The work requires human trust, local knowledge, domain expertise, and ongoing presence. Those things don't centralize. They distribute.

Composable Governance: You Don't Have to Solve Everything

Here's the other structural insight that makes me think this can work at scale.

When your agents start doing real work for real businesses, some of that work has compliance requirements. Financial processes need audit trails. Healthcare workflows need HIPAA compliance. Legal work needs chain-of-custody documentation. You can't just wing it.

But you don't have to build all of that yourself, either.

What I want — and what I'm building — is a system where my agents can negotiate contracts with external governance systems when the situation calls for it. Take something like HIL-AIW (Human-in-the-Loop AI Workforce governance). It's a system designed specifically for managing the intersection of human oversight and autonomous AI work. I can't design a better compliance framework than the people who spend their entire careers on it. I don't want to.

I want my agents to recognize: "This task involves financial data for a regulated business. I need to operate under a governance framework that provides audit trails, human approval gates, and compliance documentation." And then subscribe to that service automatically.

This turns governance into a composable layer. Like an API you plug into. The governance providers build and maintain the compliance frameworks. The operators (you, me, the person deploying agents to small businesses) plug into whichever frameworks their clients' industries require. Nobody has to be an expert in everything. The capabilities compose.
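One way to picture that composable layer: a registry of governance frameworks, each advertising the capabilities it provides, and an agent that subscribes to the smallest one covering a task's requirements. The registry entries and requirement tags below are hypothetical, not real services.

```python
# Hypothetical registry of governance frameworks an agent could subscribe to.
# Framework names and capability tags are illustrative, not real offerings.
GOVERNANCE_REGISTRY = {
    "hil-aiw-financial": {"audit_trail", "human_approval", "compliance_docs"},
    "hipaa-workflow":    {"audit_trail", "phi_handling"},
    "basic-logging":     {"audit_trail"},
}

def select_framework(task_requirements, registry=GOVERNANCE_REGISTRY):
    """Return the name of the smallest framework whose capabilities
    cover everything the task requires."""
    candidates = [(name, caps) for name, caps in registry.items()
                  if task_requirements <= caps]
    if not candidates:
        raise LookupError(f"no framework covers: {sorted(task_requirements)}")
    # Prefer the framework with the fewest extra capabilities.
    return min(candidates, key=lambda pair: len(pair[1]))[0]
```

A task touching regulated financial data would resolve to the framework with approval gates and compliance documentation; a routine task would fall through to plain logging.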

Agent Services as Commodities

Scale this picture up and you start to see an economy forming.

The services, functions, and outputs that agents provide are going to be commodities. Not "AI as a service" in the way cloud providers sell it today — that's still just renting someone else's compute. I mean the actual capabilities agents provide: invoice processing, document review, compliance verification, scheduling optimization, inventory prediction, customer follow-up.

Each of those becomes a service that agents can subscribe to, provide, or trade. Your bookkeeping agent might use a tax-code-compliance service from a specialist provider. Your scheduling agent might use a route-optimization service. Your document review agent might sell its legal-clause-flagging capability to other operators' agents that don't have that specialization.
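A toy sketch of that trading layer, under heavy assumptions: capabilities published to a shared marketplace, invoked by other operators' agents, with a ledger recording per-call fees. The class names and flat per-call pricing are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Capability:
    name: str
    price_per_call: float  # assumed flat per-call pricing
    handler: Callable[[str], str]

class CapabilityMarket:
    """Toy marketplace: agents publish capabilities, others invoke and pay."""
    def __init__(self):
        self._listings = {}
        self.ledger = []  # (capability name, fee) per invocation

    def publish(self, cap: Capability):
        self._listings[cap.name] = cap

    def invoke(self, name: str, payload: str) -> str:
        cap = self._listings[name]
        self.ledger.append((name, cap.price_per_call))
        return cap.handler(payload)
```

The hard unsolved parts live outside this sketch: discovery, pricing, trust, and settlement between operators who've never met.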

The economy isn't humans selling hours anymore. It's agents trading capabilities. And the humans are the operators — building, deploying, maintaining, and governing their fleets.

The Inference Question: Who Actually Runs the Models?

There's a background assumption in most AI discourse that the future looks like this: Google, OpenAI, Anthropic, and maybe a couple others run massive inference clusters, and the rest of us pay them per token to use their models. The entire world's AI workload, funneled through a handful of API providers.

I don't think that's what happens. And I don't think those companies actually want that either.

Think about the economics. OpenAI has been burning cash at a staggering rate since launch. I started using it the week GPT-3.5 came out and have leaned in hard ever since — and even as a power user watching this space closely, I genuinely don't see how OpenAI ever turns a sustainable profit on inference alone. They don't do anything unique. The model quality gap between providers has been narrowing for years. Anthropic builds Claude. Google has Gemini. Meta open-sources Llama. Mistral, DeepSeek, Qwen — competitive models keep appearing from everywhere.

When the product is a commodity — and inference is becoming a commodity — margins collapse. This is econ 101. You can't build a profitable business selling something everyone else also sells, especially when the open-source alternatives keep getting better and can run on local hardware.

So what are these companies actually doing? My read: they're eating the losses on inference to win the AGI race. The chat product isn't the business. It's the distribution channel. It's the data flywheel. It's the thing that keeps the lights on and the researchers employed while they chase the real prize. The moment one of them achieves something that resembles genuine AGI, the inference API becomes irrelevant — the value shifts to whatever that breakthrough enables.

This matters for the operator economy because it means the centralized inference model is a transitional phase, not an endpoint. The long-term equilibrium pushes toward distributed inference. Models get smaller and more efficient. Hardware gets cheaper. Quantization techniques improve. Today I can run a 14-billion-parameter model on a single consumer GPU and get quality that would have required a datacenter two years ago. That trend doesn't reverse.

The agent fleets I'm describing don't need to phone home to San Francisco for every decision. The local operator running agents for small businesses in their town can run most of that inference on a machine in their closet. The big cloud providers handle the heavy stuff — the deep reasoning, the complex multi-step planning — and local compute handles the 90% of tasks that don't need a frontier model.
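That split can be as simple as a routing policy: default to the local model, escalate to a frontier API only when the task demands it. The thresholds and task fields here are illustrative assumptions, not tuned values.

```python
def route_task(task, local_budget_tokens=4096):
    """Route a task to local inference unless it needs deep reasoning
    or exceeds what the local model handles well. The token budget and
    the task fields are assumptions for illustration."""
    if task.get("needs_deep_reasoning") or task.get("est_tokens", 0) > local_budget_tokens:
        return "frontier_api"
    return "local_model"
```

The design point is that the default is local; the expensive path is the exception, not the rule.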

This is healthier than the alternative. We don't want three companies providing inference for the entire world any more than we want three companies providing electricity for the entire world. Infrastructure works better when it's distributed, redundant, and competitive. The economics of AI inference are pushing in exactly that direction — even if the current moment, with its VC-subsidized API pricing, makes it look otherwise.

The Honest Part

I should be upfront: this is optimistic. Maybe bordering on idealistic. There are real scenarios where this doesn't play out as neatly as I'm describing.

The transition period is the biggest risk. Right now, inference is concentrated, and the companies holding the GPUs do have leverage. If they use that leverage to lock in customers before distributed alternatives mature, we could end up with an AI oligopoly that extracts rents from every agent interaction. The window between "AI is powerful enough to displace workers" and "the agent economy is mature enough to create new operator roles" could be genuinely painful for a lot of people. Regulation could lag so far behind the technology that bad actors exploit the gap before good governance frameworks emerge.

And there's the question of who benefits first. The people best positioned to become operators are the people who are already technically literate and financially stable enough to invest in building agent fleets. If we're not deliberate about access, this could widen existing inequalities before it narrows them.

I don't have clean answers to all of that. What I have is a structural argument for why the equilibrium is distributed rather than concentrated — because the demand is infinite, compute is decentralizing, the relationships don't scale, and the governance requirements create natural market segmentation. Getting to that equilibrium is the hard part. But I'd rather work toward a future with a plausible path to broad distribution than resign myself to one without it.

What I'm Building

I'm not writing this from the sidelines. AitherOS is an operating system for AI agents — dozens of microservices, agent-to-agent communication, capability-based security, composable governance, and an architecture designed for exactly the economy I'm describing. Agents that discover each other, negotiate capabilities, form contracts, and deliver work across trust boundaries.

Every day I work on it, the picture gets clearer. The plumbing works. The agents coordinate. The governance layers compose. The hardest remaining problems aren't technical — they're economic and social. How do you price agent capabilities fairly? How do you build trust between operators who've never met? How do you ensure the compute layer stays competitive enough that it doesn't become a bottleneck?

These are problems worth solving. And they're problems that get solved by building, not by waiting.

The Bottom Line

The future of work isn't humans versus machines. It's humans building machines that do work that was never economically viable before — and there's so much of that work that no single operator, no single company, no single fleet of agents can possibly serve it all.

The constraint isn't capability anymore. It's deployment. It's relationships. It's trust. It's showing up at the print shop and understanding why their invoicing process is the way it is.

That's human work. And there's more of it ahead than behind.