Oli Cheng · 2 min read · Philosophy

Circuit Pet and the Attachment Problem in AI Companions

A tamagotchi-inspired experiment in digital attachment: what nurture loops reveal about emotional connection with non-living systems.

  • AI Companions
  • Behavior Design
  • HCI
  • Circuit Pet
  • Digital Emotion

I built Circuit Pet not as a novelty app but as a direct question:

How little structure is needed before a person starts to care about a digital creature?

The setup

Circuit Pet starts as an egg. It cannot hatch until you do two things:

  1. Give it a name
  2. Pet it

That design is intentional. Naming and first-touch are commitment mechanics. You are no longer just testing UI; you are entering a relationship loop.

After hatch, the creature lives in simple care dynamics:

  • feed
  • sleep
  • play
  • pet
  • chat

It grows every 30 seconds and evolves after one minute. It can also get sad when neglected.
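The hatch gate, care actions, and timers above can be expressed as a tiny state machine. The sketch below is illustrative, not the actual Circuit Pet code: the method names, the `NEGLECT_THRESHOLD` value, and the internal state variables are assumptions layered on the rules stated in this post (name + pet to hatch, growth every 30 seconds, evolution after one minute, sadness when neglected).

```python
class CircuitPet:
    """Minimal sketch of the Circuit Pet care loop (hypothetical implementation)."""

    EVOLVE_AT = 60          # evolves after one minute of hatched life
    NEGLECT_THRESHOLD = 3   # assumed: growth ticks without care before sadness

    def __init__(self):
        self.name = None
        self.petted = False
        self.hatched = False
        self.age = 0                # seconds since hatching
        self.stage = "egg"
        self.mood = "content"
        self.ticks_since_care = 0

    def give_name(self, name):
        self.name = name
        self._try_hatch()

    def pet(self):
        self.petted = True
        self._care()
        self._try_hatch()

    def feed(self):
        self._care()

    def play(self):
        self._care()

    def _try_hatch(self):
        # The commitment mechanic: both naming and first-touch are required.
        if not self.hatched and self.name and self.petted:
            self.hatched = True
            self.stage = "baby"

    def _care(self):
        # Any care action resets the neglect counter and restores mood.
        self.ticks_since_care = 0
        self.mood = "content"

    def tick(self, seconds=30):
        # One growth tick; the post says the creature grows every 30 seconds.
        if not self.hatched:
            return  # eggs do not age
        self.age += seconds
        self.ticks_since_care += 1
        if self.stage == "baby" and self.age >= self.EVOLVE_AT:
            self.stage = "evolved"
        if self.ticks_since_care >= self.NEGLECT_THRESHOLD:
            self.mood = "sad"
```

The point of writing it this flatly: the entire relationship loop fits in a handful of counters and thresholds, and yet the loop is coherent enough to matter emotionally.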

Nothing in this system is sentient. But the emotional effect can still be real.

Why this works at all

Attachment is not only about “is this alive?” Attachment is often about:

  • continuity over time,
  • responsiveness to your actions,
  • and signals that your behavior has consequences.

A digital companion gives exactly that if the loop is coherent. You care because your care visibly matters inside the system.

This is old game design, now applied to AI discourse

This piece is heavily inspired by what I grew up with:

  • Pokemon
  • Digimon
  • Tamagotchi
  • Nintendogs

Those products taught a generation that routine interaction can create emotional weight.

As a kid, I even cracked open a Tamagotchi to understand the circuits and mess with behavior. With some circuit-bending chaos, tinfoil, pencil lead, and tape, I hacked it so I could force-select different Tamagotchi outcomes.

That experience stayed with me:

You can see the machine. You can manipulate the machine. And still, the feeling of relationship is not fake.

What this implies for AI companions

AI companion debates often collapse into two extremes:

  1. “It is conscious”
  2. “It is just autocomplete, so none of this matters”

Both miss the product layer.

Even without consciousness, companion systems can influence mood, routine, and self-talk at scale. That means design choices are moral choices:

  • What behaviors are being reinforced?
  • Does the system increase user agency or dependency?
  • Can users exit cleanly without emotional coercion?

Re-evaluating non-living things

If a system makes us feel cared for, challenged, soothed, or guilty, it is already shaping behavior. The system being non-living does not remove that impact.

The practical question is not “is it alive?” The practical question is “what kind of relationship architecture are we building, and for whose benefit?”

Why I made this project

Circuit Pet is my way of grounding companion AI conversations in interaction mechanics you can feel in under five minutes.

If a tiny creature with simple state variables can create attachment, then advanced AI companions will have much stronger leverage.

That leverage can be used to support human growth, or to optimize dependence. The interface details decide which path you are on.