Oliver 'Oli' Cheng
Oli Cheng · 1 min read · Design

Design Philosophy for AI Interfaces

A practical design stance: less UI noise, more user confidence, and better decisions under uncertainty.

  • Design
  • AI UX
  • Product

Most AI UX fails for one reason: we ask users to trust invisible decisions.

My design philosophy is straightforward:

  • make intent visible
  • make uncertainty legible
  • make next steps obvious

Reduce cognitive tax

If users have to decode your interface before they can do their task, you've lost.

I design for:

  • clear starting points
  • constrained choice where helpful
  • progressive disclosure for complexity

“Powerful” UX is often just compact confusion.

Show work when it matters

Not every output needs chain-of-thought-style exposition. But users need enough to answer:

  • Why did I get this?
  • What assumptions were made?
  • What can I do next?

That’s trust UX.
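One way to make this concrete is to treat the three questions as structured metadata that travels with every AI answer, rather than prose the model may or may not emit. A minimal TypeScript sketch; the names (`ExplainableResponse`, `trustPanel`) are mine, not an established API:

```typescript
// Hypothetical metadata attached to every AI answer so the UI can
// answer "why", "what was assumed", and "what can I do next".
interface ExplainableResponse {
  answer: string;
  rationale: string;     // Why did I get this?
  assumptions: string[]; // What assumptions were made?
  nextActions: string[]; // What can I do next?
}

// Render a compact trust panel from the metadata.
function trustPanel(r: ExplainableResponse): string {
  return [
    `Why: ${r.rationale}`,
    `Assumed: ${r.assumptions.join("; ")}`,
    `Next: ${r.nextActions.join(" | ")}`,
  ].join("\n");
}
```

The point is the contract, not the rendering: if the payload can't be filled in, that's a signal the product owes the user a different state than a bare answer.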

Constrain before you personalize

Teams rush to personalization early. Usually that’s backwards.

First build:

  1. stable core workflow
  2. high-confidence baseline behavior
  3. clear failure handling

Then add personalization with measured blast radius.
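"Measured blast radius" can be made literal: gate personalization behind the three prerequisites above and cap how many sessions it may touch. A sketch, assuming hypothetical names and placeholder thresholds (the 0.9 and 0.05 are illustrative, not recommendations):

```typescript
// Illustrative gate: personalization activates only once the three
// prerequisites hold, and even then only for a capped share of sessions.
interface ProductMaturity {
  coreWorkflowStable: boolean;     // 1. stable core workflow
  baselineConfidence: number;      // 2. measured 0..1 on default behavior
  failureHandlingDefined: boolean; // 3. clear failure handling
}

// Returns the fraction of sessions personalization may touch.
function personalizationBudget(m: ProductMaturity): number {
  if (!m.coreWorkflowStable || !m.failureHandlingDefined) return 0;
  if (m.baselineConfidence < 0.9) return 0;
  return 0.05; // start narrow; widen only as metrics hold
}
```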

Design for refusal and ambiguity

AI products need intentional states for:

  • missing context
  • low confidence
  • policy conflicts
  • ambiguous user goals

If your only state is “answer,” quality will drift.
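These intentional states map naturally onto a discriminated union, which forces the UI to render each non-answer condition deliberately instead of collapsing everything into one path. A sketch in TypeScript; the state names are illustrative:

```typescript
// "Answer" is just one arm of the union; each non-answer condition is
// a first-class state the UI must handle distinctly.
type AssistantState =
  | { kind: "answer"; text: string; confidence: number }
  | { kind: "needs_context"; missing: string[] }
  | { kind: "low_confidence"; draft: string; confidence: number }
  | { kind: "policy_conflict"; policy: string }
  | { kind: "ambiguous_goal"; interpretations: string[] };

function statusLabel(s: AssistantState): string {
  switch (s.kind) {
    case "answer":          return "Answer";
    case "needs_context":   return `Needs: ${s.missing.join(", ")}`;
    case "low_confidence":  return "Low confidence: draft only";
    case "policy_conflict": return `Blocked by policy: ${s.policy}`;
    case "ambiguous_goal":  return `Which did you mean? (${s.interpretations.length} options)`;
  }
}
```

Because the switch is exhaustive, adding a new state makes the compiler flag every screen that hasn't decided how to show it.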

The visual layer should reinforce the interaction model

I care less about trendy components and more about behavioral semantics:

  • confirmation moments
  • confidence signals
  • reversible actions
  • clear ownership of final decisions

AI UX should feel calm, not performative.

A useful litmus test

After a session, can a user explain:

  • what happened,
  • why it happened,
  • what to do next?

If yes, the design is doing its job.