Circuit Pet and the Attachment Problem in AI Companions
A Tamagotchi-inspired experiment in digital attachment: what nurture loops reveal about emotional connection with non-living systems.
- AI Companions
- Behavior Design
- HCI
- Circuit Pet
Writing
No fluff. Just what worked, what broke, and the frameworks I use to ship AI products with real user value.
Showing all 31 posts.
A Tamagotchi-inspired experiment in digital attachment: what nurture loops reveal about emotional connection with non-living systems.
What an overhead herd simulator reveals about local rules, global patterns, and why coherent behavior can appear without a central controller.
Horse Flock Simulator explores emergence: how local interaction rules produce coordinated global behavior without central command.
How I think about behavior-shaping products without manipulation theater: agency, constraints, feedback loops, and measurable improvement.
A practical philosophical stance on AI anthropomorphism, Turing-style behavior tests, and what they do and do not prove.
A no-nonsense map for shipping AI features that survive real users, constraints, and messy context.
How to write prompts as contracts: role, goals, constraints, and failure behavior that hold up in production.
The principles I use to keep AI systems readable, reliable, and fast enough to ship every week.
The personal assistant project is built as an adaptive interface scaffold: fast iteration, explicit constraints, and durable architecture.
A practical design stance: less UI noise, more user confidence, and better decisions under uncertainty.
Life HUD applies RPG interface logic to real-life operations: state visibility, actionable quests, and momentum loops.
How I run lean loops for AI features: define, build, measure, and decide without fake certainty.
Family Tapestry treats lineage as living structure: editable graphs, narrative memory, and relational nuance.
LLMPrism is built on comparative thinking: side-by-side model behavior with privacy-first defaults and no provider worship.
ShipDojo is built on execution realism: move from prototype optimism to production evidence with explicit gates.
A compact operating model for shipping AI products without handoff theater or role confusion.
A token falls through a weighted machine: guided, partly stochastic, mathematically rigid, and still hard to predict path-by-path.
Divine Machine treats AI as a ritual interface: deterministic internals, interpreted meaning, and epistemic humility.
A grounded framework for agents: workflows with memory, tools, supervision, and operational accountability.
Why Bonzen is designed as a behavior loop system, not a content library: relevance, recovery, and repeat value.
This is not just model drama. It is a strategic split on product philosophy, safety posture, and developer lock-in.
How I scope one clear end-to-end AI feature for product teams so launches happen fast and hold up in real usage.
Open-ended chat feels flexible, but constrained interfaces usually produce better outcomes for real users.
A lightweight evaluation model to keep AI coaching experiences useful, safe, and measurable after launch.
A builder-focused map of the agent stack in 2025: models, tool APIs, safety posture, and deployment ergonomics.
How I turned messy customer conversations into a decision-ready roadmap copilot.
A framework for measuring whether AI features deliver repeat value instead of one-time novelty.
A practical operating system for moving fast on AI features while preserving user confidence.
What DeepSeek-V3 and R1 signaled about capability, inference economics, and the next competitive layer for AI products.
A practical explanation of RL and RLHF, and why reinforcement ideas are back at the center of AI product quality.
Why the November 2022 and March 2023 model moments still define product quality bars in 2026.