Everyone Can Build Anything (And That's The Problem)
A few weeks ago I caught a glimpse of OpenClaw's dashboard UI and felt something uncomfortable settle in my stomach. The stone palette, the dark sidebar, the minimal borders — it looked like something I'd seen before. Something I'd been building.
I asked GPT about it directly. The answer was honest and kind of brutal: convergent design. Feed an AI the same aesthetic parameters and the same SaaS conventions, and you get the same output. Every time. Not because anyone's copying anyone — just because everyone's solving the same problem with the same tools and landing in the same aesthetic neighborhood.
That's when the more uncomfortable question surfaced: if the UI is converging, what else is?
Some context on who's asking. I own a dog training business. I'm also a developer. I cycle between the two — sometimes for months at a stretch — which means every time I come back to engineering, I'm three to six months behind on a landscape that doesn't wait. This past stretch, I came back to find that AI had quietly redefined what "building something" even means.
The pitch hit hard. Agentic AI can actually keep pace with the workflow of building tools to build tools. For the first time I could stay ahead of my own ideas instead of watching them pile up in a graveyard of half-started repos. The OCD, detail-obsessed perfectionism that used to drag me into the weeds before anything shipped? Suddenly a feature, not a bug. I could just... build.
So naturally, my first thought was: I'm going to build an army.
Then a business. Then a startup. Then I talked myself down to a small team of highly specialized agents — because less is more, right? That instinct was shaped by agentic-team: a self-curating workflow built around a Prime Directive that prevents skill sprawl, enforces context hygiene, and makes the toolchain improve itself rather than silently degrade. It's the genotype to AgentHQ's phenotype, if you want to get biological about it. (Still better than the Pokémon evolution analogy I almost went with.)
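If "self-curating" sounds abstract, here's the shape of it as a sketch. None of these names come from agentic-team itself; the point is just the anti-sprawl move: skills carry usage metadata, and a curation pass drops whatever the team has stopped using instead of letting it accumulate.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch only; agentic-team's real implementation may look
# nothing like this. Skills carry usage metadata, and a curation pass
# removes what goes stale instead of letting the toolchain silently bloat.

@dataclass
class Skill:
    name: str
    description: str
    last_used: datetime
    use_count: int = 0

def curate(skills: list[Skill], max_idle_days: int = 30) -> list[Skill]:
    """Drop skills nobody has invoked recently: the anti-sprawl pass."""
    cutoff = datetime.now() - timedelta(days=max_idle_days)
    return [s for s in skills if s.last_used >= cutoff]
```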
But once you're building agents, you start asking: what are the actual ceilings? The answer isn't "better agents" — it's context. Context is the water they swim in. Get it wrong and capability doesn't matter.
So what's the first thing to build? A skill-builder and a context-compacter — because the wall I kept hitting wasn't capability, it was token usage. Who knew too much context could be a problem? Here I am, dropping out of prompt engineering university to go get my context engineering degree.
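To make "context-compacter" concrete: the core move is trimming message history to a token budget while protecting the system prompt and the most recent turns. A minimal sketch, assuming a crude four-characters-per-token heuristic; a real version would use the model's actual tokenizer and summarize dropped turns rather than deleting them outright.

```python
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)  # rough heuristic, not a real tokenizer

def compact(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt plus as many recent turns as fit the budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(estimate_tokens(m["content"]) for m in system)
    kept: list[dict] = []
    for m in reversed(rest):  # walk newest-first so recent context survives
        cost = estimate_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```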
One thing led to another. A dashboard to track context and agents across a project is useful. Agents inside that dashboard that audit the workflow, build new agents, and surface what's actually happening across a complex project over time — that's better. That's AgentHQ.
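In data terms, "track context and agents" means something roughly like this: every agent run emits a usage record, and the dashboard rolls those records up over time. The names below are illustrative assumptions, not AgentHQ's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ContextEvent:
    agent: str       # which agent did the work
    task: str        # what it was asked to do
    tokens_in: int   # context it consumed
    tokens_out: int  # output it produced

def usage_by_agent(events: list[ContextEvent]) -> dict[str, int]:
    """Roll up total token spend per agent, the first thing worth charting."""
    totals: dict[str, int] = {}
    for e in events:
        totals[e.agent] = totals.get(e.agent, 0) + e.tokens_in + e.tokens_out
    return totals
```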
Which brought me back to the question I'd been dodging: how is this different from OpenClaw — that security nightmare I was too principled (or too proud) to even download?
It's not LangChain either. LangChain wires agent pipelines in code. AgentHQ is an ops and context engineering layer for managing, observing, and optimizing agents you're already running. Closer to LangSmith or Helicone — except neither has the directive hierarchy or the feedback loop angle. Maybe I'm not rebuilding something that already exists. Maybe.
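By "directive hierarchy" I mean a cascade: directives set globally flow down through project and agent scope, and the most specific scope wins. A hedged sketch, with scope names and a merge rule that are my own invention rather than anything shipping in AgentHQ:

```python
def resolve_directives(*scopes: dict[str, str]) -> dict[str, str]:
    """Merge directive dicts in order of increasing specificity."""
    resolved: dict[str, str] = {}
    for scope in scopes:  # later (more specific) scopes override earlier ones
        resolved.update(scope)
    return resolved

global_d = {"tone": "terse", "max_context": "8k"}
project_d = {"max_context": "32k"}
agent_d = {"tone": "verbose"}
print(resolve_directives(global_d, project_d, agent_d))
# {'tone': 'verbose', 'max_context': '32k'}
```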
Here's where I had to put on a different hat.
The builder was having a small crisis. The engineer stepped back and asked what was actually happening.
A few honest conclusions:
The homogenization problem is real. Everyone's using the same tools, the same prompts, the same SaaS conventions. The barrier to entry collapsed — which meant the signal-to-noise ratio collapsed with it. "Everyone can build anything now" turned out to also mean "everyone is building the same thing."
The absorption fear is legitimate but narrower than it feels. Anthropic, OpenAI, Google will commoditize the obvious stuff — token dashboards, cost visibility, generic workflow tooling. What they won't build is an opinionated personal ops layer that accumulates your context, your directive history, your feedback loops over time. That lives with you, not with them. Genuinely hard to absorb.
The meta-layers are shifting fast. Right now it's context engineering — optimizing what the model sees. After that: intent engineering, spec engineering, eventually something like outcome engineering, where you stop specifying tasks and just declare the world state you want. The tension I keep returning to: more abstraction means more delegation, which means less control over exact outcomes. No clean answer. Feels kind of moot anyway — AGI will probably get here before I figure it out.
For now: still building. Not because I've resolved any of this — I haven't — but because I started for the right reason. I wanted tools I actually want to use. That's still true.
Post one of what I expect will be an irregular, honest series.