Build Log: From Interview Notes to an LLM-Powered Roadmap Assistant
How I converted messy customer conversations into a decision-ready roadmap copilot.
- Build Log
- LLM
- Product Ops
This was a six-week sprint to answer one operational question: how do we stop losing user insight between interviews and roadmap planning?
Most teams collect good qualitative data, then lose signal during synthesis. Notes are fragmented, quotes are hard to trace, and planning meetings devolve into strongest-opinion-wins. I wanted a system that made prioritization traceable.
Goal
Turn messy interview artifacts into recommendation cards with explicit evidence trails.
Architecture snapshot
- Ingest call notes and support threads.
- Normalize findings into jobs-to-be-done statements.
- Cluster pain signals by segment and urgency.
- Generate recommendation cards with confidence and downside notes.
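The four stages above can be sketched as a small pipeline. This is a minimal illustration, not the actual implementation; the function names and the grouping key (segment plus urgency) are my assumptions.

```python
from collections import defaultdict

def normalize(raw_notes):
    """Sketch: turn raw note dicts into jobs-to-be-done findings."""
    return [
        {"segment": n["segment"], "job": n["job"],
         "pain": n["pain"], "urgency": n.get("urgency", "low")}
        for n in raw_notes
    ]

def cluster(findings):
    """Group findings by (segment, urgency) pair."""
    groups = defaultdict(list)
    for f in findings:
        groups[(f["segment"], f["urgency"])].append(f)
    return groups

def make_cards(groups):
    """Emit one recommendation card per cluster, with frequency attached."""
    return [
        {"segment": seg, "urgency": urg,
         "pains": sorted({f["pain"] for f in fs}),
         "frequency": len(fs)}
        for (seg, urg), fs in groups.items()
    ]
```

In the real system the normalize and cluster steps are LLM-assisted; the point here is only the shape of data flowing between stages.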
Data model
{
  "segment": "Seed founder",
  "job": "Validate retention before hiring growth",
  "pain": "Analytics setup takes two days",
  "frequency": 17,
  "urgency": "high"
}
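A record this simple is easy to validate before it enters clustering. Here is one way to enforce the schema; the field set comes from the record above, and the three-level urgency scale is my assumption.

```python
# Expected fields and types, taken from the data model above.
REQUIRED = {"segment": str, "job": str, "pain": str,
            "frequency": int, "urgency": str}
# Assumed urgency scale; the post only shows "high".
VALID_URGENCY = {"low", "medium", "high"}

def validate_finding(record):
    """Raise ValueError if a finding record is malformed; return it otherwise."""
    for field, ftype in REQUIRED.items():
        if not isinstance(record.get(field), ftype):
            raise ValueError(f"{field!r} missing or wrong type")
    if record["urgency"] not in VALID_URGENCY:
        raise ValueError("urgency must be one of low/medium/high")
    return record
```

Rejecting malformed findings at ingest keeps the later clustering and card-generation steps from silently working with partial records.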
What failed first
The first output wave looked polished but was operationally weak. Recommendations were plausible yet hard to trust because they lacked explicit provenance.
The fix was structural, not cosmetic:
- every recommendation required source quotes
- every recommendation required a confidence label
- every recommendation required a downside note
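The three requirements above amount to a provenance gate in front of the planning doc. A minimal sketch, assuming cards are dicts and the field names `source_quotes`, `confidence`, and `downside` (my naming, not necessarily the original):

```python
# Fields every recommendation card must carry before it reaches planning.
PROVENANCE_FIELDS = ("source_quotes", "confidence", "downside")

def gate(card):
    """Return (passes, missing_fields) for a recommendation card."""
    missing = [f for f in PROVENANCE_FIELDS if not card.get(f)]
    return (len(missing) == 0, missing)
```

The gate is deliberately dumb: it checks presence, not quality. That was enough to change the planning dynamic, because a card without quotes simply never made it into the meeting.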
That changed the planning dynamic immediately.
Outcome
Roadmap synthesis time dropped from roughly 6 hours to 90 minutes per cycle.
More importantly, disagreement quality improved. Instead of arguing from memory, the team debated tradeoffs against visible evidence.
What I would improve next
The next iteration adds an evaluator to flag low-evidence recommendations before they reach planning docs. I also want segment drift alerts, so historical assumptions are challenged when new data shifts them.
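The low-evidence evaluator could start as a simple threshold on quote count before anything smarter. A sketch under that assumption; `min_quotes=2` is an arbitrary illustrative cutoff:

```python
def flag_low_evidence(cards, min_quotes=2):
    """Hypothetical evaluator: return cards backed by fewer than min_quotes quotes."""
    return [c for c in cards
            if len(c.get("source_quotes", [])) < min_quotes]
```

A crude count is a weak proxy for evidence quality, but even this would catch the plausible-yet-unsupported cards that slipped through the first output wave.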
Bottom line
The value was not “LLM magic.” The value was building a repeatable decision interface on top of noisy qualitative data.
That is the real product lesson: AI is useful when it compresses ambiguity into inspectable structure without deleting nuance.
Best,
Oli
May 16, 2025