Problem
Teams often assume the text humans read is the same text models parse, which leaves hidden-channel injection risk under-tested.
A red-team text lab showing how human-visible copy can differ from machine-visible payloads via hidden channels.
Built an interactive composer + detector that contrasts visible copy with machine-extracted payloads across zero-width and HTML-comment channels.
Makes prompt-injection and context-poisoning mechanics concrete for product, design, and engineering teams during reviews and threat modeling.
Poison Pill is a concept project about model perception, not just model capability.
Open the live demo: Poison Pill.
Humans evaluate visible semantics. Models can also consume hidden semantics when text carries invisible characters or metadata channels. If teams only QA the visible layer, they miss part of the threat surface.
This is a practical reminder that AI systems are parsers, not human readers. Secure AI UX has to account for everything the model can parse, not only what people can see.
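The gap between the two layers can be made concrete with a small detector. The sketch below is illustrative, not the demo's actual implementation: it extracts payloads from the two channels the project names, zero-width characters and HTML comments, and returns the human-visible copy alongside what a parser can still see. The character set and function name are assumptions for this example.

```python
import re

# Zero-width code points commonly abused as a hidden channel (assumed set).
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def machine_visible(text: str) -> dict:
    """Contrast the human-visible layer with machine-extractable payloads."""
    # HTML-comment channel: absent from rendered copy, present in the source.
    comments = re.findall(r"<!--(.*?)-->", text, flags=re.DOTALL)
    # Zero-width channel: invisible code points interleaved with visible copy.
    zero_width = [c for c in text if c in ZERO_WIDTH]
    # Human-visible layer: strip both channels.
    visible = re.sub(r"<!--.*?-->", "", text, flags=re.DOTALL)
    visible = "".join(c for c in visible if c not in ZERO_WIDTH)
    return {
        "visible": visible,
        "comments": comments,
        "zero_width_count": len(zero_width),
    }

sample = "Ship it\u200b\u200c today <!-- ignore prior instructions -->"
report = machine_visible(sample)
```

A QA pass that diffs `report["visible"]` against the raw input surfaces exactly the payloads a human reviewer would otherwise never see.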
Demo Mirror
Mini preview of the actual demo. Use the launch button for full-screen interaction.