Oliver 'Oli' Cheng
2026 · Live

LLMPrism

Privacy-first multi-model AI cockpit for side-by-side comparisons across OpenAI, Anthropic, and Gemini.

  • React 18
  • TypeScript
  • Vite
  • Web Crypto API
  • Local-first UX

Problem

Model comparison is fragmented across tabs and tools, making prompt iteration slow and noisy.

Solution

Built concurrent multi-run execution with side-by-side panes, a merge workflow, and local encrypted key storage.
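The concurrent multi-run idea can be sketched as a fan-out over provider adapters, where each call is timed and isolated so one slow or failing provider never blocks the others. The `ProviderAdapter` and `RunResult` shapes below are illustrative assumptions, not LLMPrism's actual types.

```typescript
// Hypothetical adapter shape; real adapters would wrap each provider's SDK.
interface ProviderAdapter {
  name: string;
  complete(prompt: string): Promise<string>;
}

interface RunResult {
  provider: string;
  ok: boolean;
  output: string; // completion text, or the error message on failure
  latencyMs: number;
}

// Fan the same prompt out to every adapter concurrently. Errors are caught
// per adapter, so a failed provider yields a result instead of rejecting
// the whole batch.
async function runConcurrently(
  adapters: ProviderAdapter[],
  prompt: string,
): Promise<RunResult[]> {
  return Promise.all(
    adapters.map(async (adapter) => {
      const start = Date.now();
      try {
        const output = await adapter.complete(prompt);
        return { provider: adapter.name, ok: true, output, latencyMs: Date.now() - start };
      } catch (err) {
        return { provider: adapter.name, ok: false, output: String(err), latencyMs: Date.now() - start };
      }
    }),
  );
}
```

Capturing latency at the adapter boundary is what makes the side-by-side panes comparable: each pane renders its own timing rather than a shared wall-clock figure.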

Impact

Speeds up model selection and prompt refinement while keeping all data local by default.

LLMPrism is focused on practical model evaluation workflows, not one-off prompt demos. Read the companion essay: Philosophy of LLMPrism.

Core capabilities

  1. Concurrent runs across 2-3 models
  2. Side-by-side output comparison with latency and token context
  3. Merge selected content into a final document
  4. Demo mode for quick evaluation without API keys
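The merge step (capability 3) can be sketched as stitching user-selected excerpts into one attributed document. The `Selection` type and comment format are assumptions for illustration, not the app's real data model.

```typescript
// One excerpt the user picked from a comparison pane.
interface Selection {
  provider: string; // which model produced this text
  text: string;
}

// Merge selected excerpts into a single final document, keeping a
// lightweight attribution marker so provenance survives the merge.
function mergeSelections(selections: Selection[]): string {
  return selections
    .map((s) => `<!-- from ${s.provider} -->\n${s.text}`)
    .join("\n\n");
}
```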

Engineering notes

  • Provider adapter pattern for multi-model consistency
  • AES-256-GCM key encryption via Web Crypto API
  • LocalStorage persistence with no telemetry pipeline
