
Protocol explainer // Model Context Protocol

AI infrastructure, made legible

MCP Control Room

Model Context Protocol is the standard layer between an AI host app and the tools it can actually use. This demo shows the full chain: request, tool discovery, server call, structured result, grounded answer.

  • Tool discovery
  • Structured I/O
  • Permission boundary
  • Grounded answers
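The chain above can be sketched as plain JSON-RPC 2.0 messages. The method names (`tools/list`, `tools/call`) and result shape follow the MCP spec; the tool name `read_file` and its arguments are illustrative assumptions, not a real server's contract.

```python
import json

# 1. Tool discovery: the host asks the server what it can do.
discover = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# 2. Server call: the model picks a tool; the host forwards the call.
call = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    # "read_file" and its arguments are hypothetical examples.
    "params": {"name": "read_file", "arguments": {"path": "notes.md"}},
}

# 3. Structured result: the server replies with typed content blocks.
result = {
    "jsonrpc": "2.0",
    "id": 2,
    "result": {
        "content": [{"type": "text", "text": "# Notes"}],
        "isError": False,
    },
}

for msg in (discover, call, result):
    print(json.dumps(msg))
```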

Request

User asks: a natural-language prompt enters the system.

Host app

Codex Desktop: owns permissions and server connections.

Model

GPT-5 agent: chooses tools based on the request.

MCP server

Filesystem MCP server: exposes its tools with a standard schema.

Tool result

search + read_file: structured output flows back into the model's reasoning.
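The last step, folding a structured result back into the model's context so the answer stays grounded, might look like this. The helper name and rendered format are assumptions for illustration, not part of the MCP spec.

```python
def ground_result(tool_name: str, result: dict) -> str:
    """Render a tools/call result's text blocks as context the model can cite.

    `ground_result` is a hypothetical helper; only the result shape
    (content blocks with a "type" field) comes from the MCP spec.
    """
    blocks = result.get("content", [])
    body = "\n".join(b["text"] for b in blocks if b.get("type") == "text")
    return f"[{tool_name} returned]\n{body}"


# Example: a search tool reported its findings as a single text block.
result = {"content": [{"type": "text", "text": "3 matches in src/"}], "isError": False}
print(ground_result("search", result))
```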

Choose a scenario and run the workflow to watch the protocol chain resolve into a grounded answer.


The product surface

Claude Desktop, Codex, Cursor, or your own product can all be the host. The host manages which servers exist, what the model is allowed to call, and how results are displayed to the user.
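The permission boundary the host enforces can be sketched as a simple allowlist check. Real hosts implement their own richer policies; the allowlist below and the `authorize` helper are illustrative assumptions.

```python
# Hypothetical host-side policy: which tools each connected server
# may expose to the model. Server and tool names are examples.
ALLOWED_TOOLS = {"filesystem": {"search", "read_file"}}


def authorize(server: str, tool: str) -> bool:
    """Return True only if the host permits this server/tool pair."""
    return tool in ALLOWED_TOOLS.get(server, set())


print(authorize("filesystem", "read_file"))    # permitted by policy
print(authorize("filesystem", "delete_file"))  # not on the allowlist
```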

The tool gateway

An MCP server is just a standard bridge around some capability: filesystem access, GitHub, calendar, docs, search, or anything custom your team exposes.
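The "standard schema" each server advertises looks roughly like this: a `tools/list` response where every tool carries a name, a description, and a JSON Schema for its input. This `read_file` definition is an illustrative sketch, not the actual Filesystem server's schema.

```python
import json

# A single tool entry as it would appear in a tools/list response.
# The name, description, and schema fields here are assumed examples.
read_file_tool = {
    "name": "read_file",
    "description": "Read a UTF-8 text file from the allowed roots.",
    "inputSchema": {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
}

print(json.dumps({"tools": [read_file_tool]}, indent=2))
```

Because the schema is plain JSON Schema, any host can validate arguments and render a permission prompt without knowing anything else about the server.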

Less glue, more leverage

Without a standard protocol, every host app and every integration invents its own contract. MCP makes those integrations legible, reusable, and easier to inspect.