
If AI Is Electricity, Agents Are Engines

Andrew Ng’s electricity metaphor still holds, but agents are where that power gets mechanized into real systems, driving labor shifts and hard philosophical questions.

  • AI Philosophy
  • Agents
  • Future of Work
  • Mechanize
  • Post-scarcity
Andrew Ng’s line that “AI is the new electricity” is still one of the most useful framing devices in tech.

Electricity by itself is potential. It becomes practical only when it is routed through specific machines: motors, appliances, factories, networks. I think the same thing is happening now with AI.

If foundation models are the power grid, then agents are the engines.

We are in the era of learning how to tune, steer, and safely ride those engines.

From electricity to engines

Right now, people are building different “vehicles” on top of model intelligence:

  • tiny utility bots (quick automation)
  • workflow copilots (workplace tooling)
  • semi-autonomous operators (multi-step execution)

Some are like scooters. Some are motorcycles. Some are early cars with terrible brakes.

That “motorcycle of the mind” phrase from Naval resonates here: high leverage, high speed, and high consequence if you do not know how to drive.

The real competitive edge is not who has the loudest model demo. It is who can build reliable control systems around model capability.

Tool usage vs consciousness projection

This is where discourse gets messy.

There are two competing stories:

  1. Tool story: agents are human-built systems trained on accumulated human knowledge, orchestrated by code, and bounded by infrastructure.
  2. Entity story: agents are independent minds that should be treated as separate from human creation.

I think the tool story is the more grounded one for current systems.

Agents can feel autonomous because they are complex, stochastic, and linguistically fluent. But “feels autonomous” is not the same as “is ontologically independent.”

These systems are downstream of human data, human objectives, and human architecture choices. Their power is real. Their metaphysics are often overstated.

The labor transition question is real

A recent wave of essays has argued that AI may replace not just routine labor, but highly skilled knowledge work, including strong engineers.

That anxiety is not irrational. Major labor transitions are disruptive.

But we should keep historical perspective. Engines replaced horses in transport economies, and yes, the market for horse labor collapsed. Yet horses did not vanish. Their role changed: less coerced labor, and more roles centered on care, sport, and companionship.

The tractor displaced a huge class of animal labor too. Nobody frames that as a moral failure toward draft animals.

The analogy gets uncomfortable when applied to humans: automation may eliminate many current jobs, but it can also eliminate large volumes of repetitive cognitive toil.

Mechanize, post-scarcity, and the governance gap

The mechanize.work controversy surfaced a real fault line:

  • one camp sees broad labor automation as a path to abundance,
  • another sees it as a path to concentration, precarity, and social fracture.

Both concerns are valid.

I am not opposed to a future that looks like “fully automated luxury” in some domains. Abundance is not inherently dystopian.

But abundance without governance is just inequality at machine scale.

If we want a high-automation future to be humane, we need intentional design for dignity and diversity:

  1. Dignity: people need meaningful agency, not just passive consumption.
  2. Distribution: productivity gains cannot pool into a tiny ownership layer.
  3. Pluralism: multiple ways of living and creating should remain viable.
  4. Transition support: reskilling, income stability, and institutional adaptation must be treated as core infrastructure.

My position

AI agents are not our replacements in a metaphysical sense. They are force multipliers for human intent.

Used badly, they can intensify extraction and reduce autonomy. Used well, they can free people from low-leverage work and expand what small teams can build.

The future is not decided by model capability alone. It is decided by product decisions, ownership structures, and policy choices around those models.

Final thought

Electricity did not remove human civilization. It reconfigured it.

AI will do the same.

The practical job right now is to become excellent at building and steering these engines, while staying clear-eyed about what they are: powerful human-made tools, not gods, not ghosts, and not destiny by default.


Your friend, Oli — Oliver Cheng | March 2, 2026