
If AI Is Electricity, Agents Are Engines

Andrew Ng's electricity metaphor still holds. The next phase is execution at scale: agent engines can drive major abundance, with real upside for broad access if costs keep collapsing.

  • AI Philosophy
  • Agents
  • Future of Work
  • Mechanize
  • Abundance

Andrew Ng’s line that “AI is the new electricity” is still one of the best models for this moment.

Electricity is latent capability. It only becomes real when routed through machines that do useful work. In the same way, foundation models are potential. Agents are the engines that convert that potential into execution.

That shift from capability to execution is where labor, ownership, and politics become concrete.

Quick definitions (plain English)

  • Foundation model: a large general-purpose AI model that can be adapted to many tasks.
  • Agent: an AI workflow that plans and executes multi-step tasks.
  • Control loop: repeated cycle of act, check, correct, and continue.
  • Power topology: who controls the key layers (compute, models, data, distribution, policy).
  • Techno-anarchism (my usage): a preference for distributed control and user agency over concentrated gatekeeping.

From model demos to working engines

The market is no longer asking whether LLMs can answer questions. It is asking whether agent systems can produce reliable outcomes:

  • resolve support cases,
  • execute account workflows,
  • coordinate tooling,
  • draft and revise artifacts with low supervision.

When these systems work, they behave like composable engines. Prompts are no longer the product. Control loops are the product.

Mechanize and the real disagreement

The mechanize.work controversy exposed a deeper split than “pro-AI vs anti-AI.”

The real disagreement is about power topology.

Do we build a future where labor automation is controlled by a small ownership class and mediated by closed platforms? Or do we build a future where small teams and individuals can run their own high-leverage automation stacks with real exit rights?

My stance is closer to the second: less centralized command, more distributed capacity.

I do not want a bureaucratic utopia where one institution plans everyone’s work. I also do not want a corporate monoculture where five companies mediate all cognition.

I want a high-agency ecosystem:

  1. open protocols where possible,
  2. competitive model and infra layers,
  3. portable identity/data/workflows,
  4. many viable ways to earn, build, and coordinate.

Call it pragmatic techno-anarchism if you want. The principle is simple: automation should expand freedom, not narrow it.

Labor displacement is real, but so is leverage expansion

Yes, serious displacement is coming for both physical and cognitive work.

But there is a second truth: agent systems dramatically increase what one person or one small team can build. A founder with strong product judgment and technical literacy can now operate with the throughput of a much larger org from just a few years ago.

That is not theoretical. It is already visible in teams shipping with thinner staffing and tighter loops.

The question is not whether labor markets will change. They already are.

The question is whether the new productivity dividend lands in many hands or in very few.

Horse analogy, with source

The horse-to-car analogy in automation talk is not new. A concise version that maps well to this AI moment is Andy Jones’s essay, Horses, which frames how a new machine regime can displace even very strong workers from the previous regime.

I reference it because it is useful, not because it is destiny. The lesson is not “humans are obsolete.” The lesson is “capability transitions can outpace institutional adaptation,” so governance and distribution design cannot be treated as optional.

The new literacy requirement

In this environment, “AI literacy” is too vague.

The practical literacy is:

  • scope clarity,
  • implementation literacy,
  • systems thinking,
  • evaluation discipline.

If you cannot define the task, you cannot supervise the agent. If you cannot read the system, you cannot trust the output.
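
What evaluation discipline means in practice: a task is only supervisable if its acceptance checks are written down. A minimal sketch, with hypothetical check names and no eval framework assumed:

```python
def evaluate(outputs, checks):
    """Run every acceptance check against every output; return pass rate per check."""
    report = {}
    for name, check in checks.items():
        passed = sum(1 for out in outputs if check(out))
        report[name] = passed / len(outputs)
    return report

# Toy scope definition for a "draft a support reply" task.
checks = {
    "nonempty": lambda out: len(out.strip()) > 0,
    "has_greeting": lambda out: out.lower().startswith(("hi", "hello")),
    "under_500_chars": lambda out: len(out) <= 500,
}
outputs = [
    "Hello, thanks for reaching out. Your refund is processed.",
    "REFUND DENIED.",
    "Hi, your ticket is resolved.",
]
print(evaluate(outputs, checks))
# → {'nonempty': 1.0, 'has_greeting': 0.666..., 'under_500_chars': 1.0}
```

Writing the checks down is the scope-clarity step; running them on every output is the supervision step. If you cannot express the check, you have not defined the task.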

This is why the PM + engineering bridge matters so much right now. Design and strategy still matter, but strategy without implementation understanding is brittle under agentic execution.

Abundance pressure may be stronger than we think

People often frame this as if distribution only happens through top-down intervention.

I think that misses a second force: when capability gets cheap enough, access pressure can emerge on its own.

We have already seen this pattern in AI:

  • model quality diffuses,
  • open alternatives improve,
  • prices compress,
  • and yesterday’s premium capability becomes today’s baseline utility.

This is the part of the Mechanize viewpoint I am sympathetic to: if the engines are real, run them hard. The productivity upside is too large to ignore.

Massive surplus does not automatically create perfect outcomes, but it can materially widen the feasible set for ordinary people. More individuals and small teams can afford tools that were previously enterprise-only.

That possibility matters. Abundance can be a distribution mechanism in practice, not just a slogan.

What I believe

Agents are not mystical entities. They are engineered systems trained on human-produced corpora, orchestrated by human-authored code, and deployed into human economies.

I am optimistic that the long-run direction is higher abundance, lower cognitive labor cost, and broader practical capability for more people.

I do not think that outcome is guaranteed, but I do think it is plausible enough to build for it directly: ship useful systems, lower cost, increase leverage, and let compounding productivity create new room for participation.

This is why I stay optimistic while still taking power dynamics seriously. The upside is real, and it is worth aiming at.

Best,
Oli
March 2, 2026