I was updating customer interview insights for a client on Miro last week when I realized I've run 80 customer interviews in the last year. Across seven bullseye customer sprints plus several ongoing weekly interview series. And how I started running them isn't how I run them now.
The first 25-30? Pretty manual. No AI. Just me, Zoom transcripts, and lots of sticky notes on a Miro board.
The interviews themselves? Those haven't changed much. Still 60 minutes, half discovery and half prototype testing. Still streaming live to client teams. Still taking notes.
But what happens before and after everyone logs off Zoom? That process has gotten sharper with every sprint. Here's how it works now.
Before the interview: write down what you think they'll say
Before each interview, we jot down what we think the participant will say. Not elaborate predictions. Just a few bullets in the interview guide notes.
Things like:
"Probably uses spreadsheets for this"
"Might mention frustration with approval process"
"Will likely prioritize speed over accuracy"
Takes a few minutes. But it changes how we listen. And how we’re able to talk plainly about what we thought we knew and what we learned. Because when someone says exactly what we predicted, that's validation. When they say something completely different, that's where insights live. The assumptions create contrast. Without them, everything just washes over you as "interesting info."
After the interview, the first thing we do in the debrief is pull up those assumptions. What did we get right? What surprised us? What did we completely miss?
The immediate aftermath
The debrief happens right after, while everything's fresh. Fifteen minutes with my client team processing what we just heard.
"Did you catch that part where she said...?"
"I wasn't expecting them to mention..."
In addition to the notes I took during the call, I jot down quick takeaways — patterns I'm seeing, things that contradicted our assumptions. Just enough to jog my memory when I dig into the transcript later.
Getting into the transcript
I conduct the interviews through Zoom, and they get recorded and transcribed. I run each transcript through Brian Greene's Redact app at https://redact-delta.vercel.app/ — which strips out personally identifiable information right in my browser. No cloud processing. Just clean transcripts with "Participant" or "Interviewer" or some other label in place of names. Same for company names, phone numbers, and email addresses.

Redact runs locally in the browser, replacing names and companies with generic labels while preserving the conversation structure.
Before Redact, I'd spend 10-30 minutes manually finding and replacing every name and identifying detail. Across even 50 interviews? That's up to 25 hours of find-and-replace.
Now it takes 30 seconds per transcript.
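For the curious, the core idea is simple enough to sketch. This isn't Redact's actual code (it runs in the browser, not in Python); it's just a minimal illustration of the same move, with made-up names and deliberately naive patterns: swap known names for labels, then scrub emails and phone numbers.

```python
import re

def redact_transcript(text, names=(), companies=()):
    """Minimal sketch of transcript redaction (not the Redact app's code):
    swap known names and companies for generic labels, then scrub emails
    and phone numbers with simple regexes."""
    # Label the first known name as the interviewer, everyone else as a participant
    for i, name in enumerate(names):
        label = "Interviewer" if i == 0 else "Participant"
        text = re.sub(re.escape(name), label, text)
    # Replace known company names with a generic placeholder
    for company in companies:
        text = re.sub(re.escape(company), "[company]", text)
    # Scrub email addresses and phone-number-looking strings
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", text)
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", text)
    return text

# Example with made-up names:
# clean = redact_transcript(raw_text, names=("Jane Doe",), companies=("Acme Corp",))
```

The real app handles the edge cases this sketch doesn't, and the text never leaves your machine, which is the whole point. The sketch is just to show that redaction is mostly labeled substitution plus a couple of patterns.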
Time to Claude
Then I drop the transcript into a Claude project I’ve set up for customer research. The project includes:
Best practices around interview analysis — cribbed from Erika Hall, Teresa Torres, Tomer Sharon, and Steve Portigal
The bullseye customer sprint methodology, currently on version 2.0
Context about what we're trying to learn
The interview guide
My debrief notes, also redacted
Those assumptions we documented beforehand
I also paste in the bullseye customer definition: the description of the person we believe will say "just take my money" to my client, and why.
Before Claude, I'd read through each transcript, highlighting in Google Docs, pulling quotes that seemed important, trying to remember patterns from previous interviews, keeping a separate document of themes that may or may not connect.
This worked. But it was slow. And I'd miss connections between the second interview and the seventh because the details from the earlier ones had faded.
Now, the analysis that used to take an hour or two per interview takes 30-45 minutes. And I catch more because I have my notes. And notes from a team of people who watched the interview live. And Claude has the transcript in tow — plus the context from all previous interviews in the project.
With all that context loaded, I can ask Claude to surface patterns across all transcripts. "What trigger events have come up repeatedly?" "Where do the value propositions diverge?" "What unstated context keeps appearing?"
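I do all of this in the Claude project UI, but the same setup translates roughly to the Anthropic API if you'd rather script it. What follows is a minimal sketch, not my actual setup: the file names are placeholders, the model name is illustrative, and the project UI keeps earlier transcripts in context automatically, whereas with the API you'd concatenate them in yourself.

```python
from anthropic import Anthropic  # pip install anthropic

client = Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# The "project" context: methodology, guide, assumptions, debrief notes,
# and the bullseye customer definition (all already redacted).
# File names below are placeholders, not my real files.
context_files = [
    "analysis_best_practices.md",
    "bullseye_sprint_methodology_v2.md",
    "interview_guide.md",
    "pre_interview_assumptions.md",
    "debrief_notes_redacted.md",
    "bullseye_customer_definition.md",
]
system_context = "\n\n".join(open(path).read() for path in context_files)

transcript = open("interview_07_redacted.txt").read()  # placeholder path

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # use whichever Claude model you prefer
    max_tokens=2000,
    system=system_context,
    messages=[{
        "role": "user",
        "content": (
            "Here's the redacted transcript of the latest interview:\n\n"
            + transcript
            + "\n\nWhat trigger events have come up repeatedly? "
              "Where do the value propositions diverge? "
              "What unstated context keeps appearing?"
        ),
    }],
)
print(response.content[0].text)
```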
Why I generate more insights than I present
The first insights spotted are usually the obvious ones. The customer said something directly. It got written down. Done. Moving on.
Breakthrough insights? Those emerge from patterns across multiple conversations, from what people didn't say, from the moment their energy shifted.
So I ask Claude for 10 insights knowing I'll present 3-4 to my client team. Claude's initial output becomes raw material — I throw most of it away as I dig deeper into what actually matters. And when I present the 3-4, I'm confident they're the most valuable, not just the most recent.
The 10-to-3 ratio also buffers for insights that seem important alone but don't connect to anything actionable. Sometimes customers mention fascinating things that don't help us make better product decisions. Those go in the archive.
What I'm hunting for
Trigger events — what changed in their world that made them receptive to a new solution? In one recent sprint, all bullseye participants mentioned a regulatory change from a few months earlier. That timing mattered — it explained why they were suddenly open to solutions they'd ignored for years.
Jobs-to-be-Done clarity — what are they hiring a solution to do? What job were they trying to accomplish when they fired their previous solution? I interviewed someone who'd fired three project management tools in two years. Wasn't about features — was about forcing their team to change workflows. The job wasn't "manage projects better," it was "get team buy-in without a fight."
Value proposition resonance — which approach got them leaning forward? "That's interesting" means nothing. "Oh, that would solve the problem where..." means everything. I watch for the shift when they stop being a research participant and start mentally using the tool.
Unstated context — this is the stuff so obvious to them they don't mention it. One client kept hearing "we need better reporting" until we learned that "reporting" meant "CYA documentation for compliance audits," not "insights for decision-making." Completely different problem.
Pattern breaks — when someone contradicts what the previous three people said, that's interesting. In one sprint, four people said they'd pay for a solution. Then the fifth said, "I'd just build it internally in a weekend." That outlier forced us to sharpen the bullseye definition — we were accidentally talking to people with in-house technical resources.
Synthesis happens in layers
After analyzing individual interviews, I start looking across the set. Usually after five conversations, patterns become visible. By interview 10, I'm able to test hypotheses from earlier sessions.
This is why I adapted the Google Ventures model from all-interviews-in-one-day to 2-3 interviews twice in one week. That gap lets patterns surface. It gives my client team time to process and come back with sharper questions. Plus, doing them all in one shot makes for a grueling day.
Between sessions, I'm updating a working document that tracks:
Emerging themes
Quotes with timestamps
Contradictions
Questions for future guides
This becomes the foundation for the final deliverable. But it's messy, in-process, full of threads that don't connect yet.

The principles I follow when analyzing customer interviews — start with what you observed, show the pattern across contexts, name the underlying system, and sound human.
What the client receives
After a bullseye customer sprint — typically five interviews over two weeks — clients get:
Individual interview recordings — posted as unlisted YouTube videos with VTT transcripts. Team members who couldn't attend live can watch at their own pace. More importantly, they can search the transcript for specific topics or terms. I encourage them not to look at the prepped materials first so they can draw their own conclusions, though I know some do anyway.
Synthesis document — 3-4 key insights with supporting evidence from each interview. And no slide decks. Who needs one more slide deck? Each insight includes quotes from at least two different interviews, the pattern it reveals, and why it matters for their specific bullseye customer. It all goes right into a Google Doc.
Updated bullseye customer definition — we started with assumptions and a working hypothesis. Something like: "Infrastructure ops manager at a mid-market financial services company. Manages a small team (3-5 people). Player-coach role. Reports to engineering leadership but measured on customer satisfaction."
After five interviews, that definition gets sharper: "Infrastructure ops manager at a regional bank or mid-market fintech, promoted in the last 12 months. Been at the company five years or less. Manages 3+ people but still doing 50%+ execution work. Reports to engineering, measured on CSAT scores and system uptime."
That specificity changes everything about positioning and messaging.
Recommended next steps — might be "run another sprint with a different segment" or "you have enough validation to move forward with prototype B" or "pivot to focus on this trigger event we discovered."
After the bullseye customer sprint, there's an end-of-sprint document with next steps. For the ongoing weekly interview series, I deliver a monthly summary that rolls up patterns across all conversations that month. Same format — key insights, why they matter, what to do about them.
What I've learned after 80 interviews
Most teams already suspect what they need to hear. They just need permission to act on it.
Customer interviews don't usually reveal completely unknown problems. They validate which problems actually matter versus which ones just make sense in conference rooms. They show you who will pull out a credit card versus who will nod politely and disappear.
When you can point to a full sprint where customers all said some version of the same thing, stakeholder opinions carry less weight. The customers become your evidence.
That's why I obsess over this process. It transforms "we think customers want this" into "we know customers will hire this solution, here's why."
Something else to consider
"The plural of anecdote is not data."
One customer telling you something interesting is an anecdote. Five customers revealing the same pattern through different stories? That's evidence you can act on.
The work between interviews — documenting assumptions, redacting, analyzing with Claude, finding patterns, cross-referencing — transforms individual stories into strategic clarity. Skip that work and you're just collecting anecdotes.
The interviews are the beginning. What you do before and after? That's what turns research into results.
If someone forwarded this to you and you want more of these thoughts on the regular, subscribe here.
Until next time,
Skipper Chong Warson
Making product strategy and design work more human — and impactful
—
Ready to run your own bullseye customer sprint? Book an intro call with us


