Oli Cheng · 2 min read · Behavior

The Steering Problem: AI, Tech, and Better Human Behavior

How I think about behavior-shaping products without manipulation theater: agency, constraints, feedback loops, and measurable improvement.

  • Behavior Design
  • AI Product
  • Bonzen
  • Product Strategy
  • Ethics

Most software already shapes behavior. The question is not whether we steer people. The question is whether we do it transparently, responsibly, and with user agency intact.

I call this the steering problem.

What I mean by steering

Steering is any product decision that changes what users notice, choose, repeat, or avoid.

That includes:

  • what gets surfaced first,
  • how friction is distributed,
  • how feedback is timed,
  • and what “success” looks like in the UI.

In AI products, steering gets stronger because the interface can adapt its language and suggestions to each user's current state.

Good steering vs bad steering

| Mode | Signature | Result |
| --- | --- | --- |
| Good steering | explicit intent, user control, measurable benefit | more agency over time |
| Bad steering | hidden nudges, compulsive loops, vague outcomes | dependency and distrust |

If users cannot explain what your system is trying to optimize in their behavior, your steering model is probably too opaque.

My practical framework

I design behavior systems around four checks:

  1. Direction: what behavior are we trying to increase or decrease?
  2. Consent: does the user understand and choose this loop?
  3. Feedback: is the signal immediate enough to learn from?
  4. Exit: can users override, pause, or reject the loop?

If any of these checks fails, the system is likely manipulative or noisy.
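As a sketch, the four checks can be written as a literal checklist. This is illustrative pseudocode-made-runnable, not real Bonzen code; all names are invented for the example:

```python
from dataclasses import dataclass


@dataclass
class BehaviorLoop:
    """Hypothetical audit of one product behavior loop against the four checks."""
    direction: bool  # is the target behavior explicitly defined?
    consent: bool    # does the user understand and choose this loop?
    feedback: bool   # is the signal immediate enough to learn from?
    exit: bool       # can the user override, pause, or reject the loop?

    def failed_checks(self) -> list[str]:
        """Return the names of any checks this loop fails."""
        return [name for name, passed in vars(self).items() if not passed]


# A loop with hidden nudges fails the consent check:
loop = BehaviorLoop(direction=True, consent=False, feedback=True, exit=True)
print(loop.failed_checks())  # ['consent'] -> likely manipulative or noisy
```

The point of writing it this way is that the audit is binary and inspectable: either a loop passes all four checks or you can name exactly which one it fails.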

Bonzen and Divine Machine sit on the same spectrum

I treat these as related but different projects:

  • Bonzen is behavior tech: check-in, guided reset, reflection, next action.
  • Divine Machine is art/critique: a ritual interface that makes deterministic inference feel oracular.

Both ask the same core question: how do interfaces shape human interpretation and action?

Bonzen answers with practical daily behavior loops. Divine Machine answers with a philosophical mirror.

Product principle: steer toward self-authorship

The goal is not “perfect behavior.” The goal is better self-authorship.

That means helping users:

  • notice patterns faster,
  • recover from bad loops sooner,
  • make one better next decision.

I distrust products that claim total transformation. I trust products that improve next-step quality repeatedly.

How I evaluate behavior products

I look at second-order effects, not just engagement:

  • Does user clarity increase over repeated use?
  • Does time-to-recovery improve after setbacks?
  • Do users become less dependent on the app over time?
  • Can they transfer learned behavior off-platform?

If the answer to the last two is no, you may be building reliance, not growth.
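These second-order questions can be sketched as a single predicate over trend metrics. The metric names and sign conventions below are assumptions I'm making for illustration (positive trend = increasing over repeated use), not an established evaluation API:

```python
# Hypothetical second-order evaluation; names and sign conventions are illustrative.
def looks_like_growth(clarity_trend: float,
                      recovery_time_trend: float,
                      dependence_trend: float,
                      transfers_off_platform: bool) -> bool:
    """A growth profile, as opposed to a reliance profile:
    clarity rising, time-to-recovery falling, dependence falling,
    and the learned behavior surviving off-platform."""
    return (clarity_trend > 0            # clarity increases with repeated use
            and recovery_time_trend < 0  # faster recovery after setbacks
            and dependence_trend < 0     # less dependence on the app over time
            and transfers_off_platform)  # behavior transfers off-platform


# Rising dependence flips the verdict, even if engagement looks healthy:
print(looks_like_growth(0.2, -0.1, 0.1, True))  # False
```

Note that raw engagement does not appear anywhere in the predicate; that omission is the point.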

Final take

AI can be used to exploit attention or to improve agency. Both are technically possible.

The difference is product intent, interaction design, and accountability.

That is the work I care about: building systems that help people steer themselves better.