The insight-killing problem

Teams think customer interviews are for validating solutions or collecting feature ideas. This leads to questions like "Do you like my idea?" or "Would you use a product that...?"

In my experience, this gets you polite, useless feedback. Rob Fitzpatrick calls these "Mom Test" failures.

I've seen this happen dozens of times. A product team schedules a round of customer interviews, asks about their work-in-progress solution, gets enthusiastic responses, and walks away confident they're building the right thing. Then, months later, they wonder why nobody's buying.

The goal shouldn't be validation. It should be discovery. You're there to uncover the customer's problems, context, and motivation through their past behavior. You want facts about what they've actually done, not opinions about what they might do with some future version of your product.

So, the two biggest mistakes teams make are:

  • Seeking validation, not facts — asking about their solution rather than the customer's life. "Do you think this feature would be useful?" instead of "Tell me about the last time you tried to solve this problem."

  • Asking for opinions or hypothetical situations — "How much would you pay for this?" instead of "Walk me through how you decided to buy your current solution."

People are terrible at predicting their future behavior. We also tend to be polite when giving feedback on someone else's idea. Past behavior is the only reliable predictor of future behavior.

Why bullseye customer interviews should be different

Traditional user interviews cast a wide net. Bullseye customer interviews focus on the customers who will say some version of "just take my money" — hopefully a sizable market.

This isn't about demographics. It's about finding people who've experienced specific trigger events that prime them for your solution. Someone who just got promoted to VP of Sales has different needs than someone who's been in the role for five years.

The 1.0 version of the bullseye customer sprint was essentially the Google Ventures "Learn More Faster" model with a change or two. Keep in mind, the original GV method focused on discovering customer problems. Bullseye Customer Sprint 2.0 builds on that, pulling from April Dunford's positioning work and Jobs-to-be-Done principles. It's about understanding not just what customers do, but why they hire solutions and why they fire them.

[Figure: the evolution from Sprint 1.0's basic problem identification to Sprint 2.0's deeper behavioral understanding, including trigger events and solution-switching patterns]

The key differences:

  • Laser focus on trigger events — recent changes that prime people to be receptive

  • Comparing value propositions, not just features — using prototypes to understand positioning and messaging

  • Live team observation with side-channel influence on the conversation

  • Discovery-first structure — understanding their world before testing any assumptions

When prep misses the point

Here's where teams usually go wrong: they think preparation means writing better questions. Questions matter, but the work that matters most happens before you ever talk to a customer.

At this point, I've conducted hundreds of these interviews across healthcare, travel, fintech, and B2B tools. The quality of your prep determines whether you get breakthrough insights or polite nods.

Interview guide development

For a 30-minute discovery segment, I usually keep it to 7-9 questions with a few backups and lots of notes on the side. The 60-minute structure is always the same: 30 minutes of discovery first (past behavior and context), then 30 minutes of assumption testing with prototypes.

But here's what most people miss — the questions aren't just what you ask. They're your roadmap for staying curious instead of falling into validation-seeking mode. Each question has backup options because real conversations don't follow scripts.

The guide keeps me focused on their world, not my assumptions about what matters to them. It’s a guide, not a script.
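
To make that concrete, here's a minimal sketch of what a guide like this might look like as a simple data structure. The questions, timings, and field names below are illustrative placeholders, not my actual template:

```python
# A rough sketch of the 60-minute guide described above: 30 minutes of
# discovery, then 30 minutes of assumption testing with two prototypes.
# Every question and label here is an illustrative placeholder.

interview_guide = {
    "discovery": {
        "minutes": 30,
        "questions": [
            {
                "ask": "Tell me about the last time you tried to solve this problem.",
                "backups": [
                    "What did you try first?",
                    "Who else was involved in that decision?",
                ],
            },
            {
                "ask": "Walk me through how you decided to buy your current solution.",
                "backups": ["What almost stopped you?"],
            },
            # ...five to seven more, each with backup options
        ],
    },
    "assumption_testing": {
        "minutes": 30,
        "prototypes": ["value_prop_a", "value_prop_b"],  # two distinct value props
    },
}
```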

Choosing the right kind and number of prototypes

Typically, I’m looking at two prototypes with slightly different value propositions. Sometimes it's a standalone web app, sometimes rough wireframes, sometimes a Figma prototype. The format matters less than having two distinct approaches to compare.

Why two instead of digging into one in more depth? The comparison is key. And in a 25-minute prototype section, two gives you enough contrast without overwhelming people. I've tried three in shorter sessions and it works okay, but two is the sweet spot for avoiding surface-level feedback.

Keep in mind, the prototypes aren't about testing usability. They're conversation starters that reveal what customers actually value versus what they say they value.

Often, after the prototypes, something comes up that got skated over in the first 30 minutes. It's worth keeping space to retread that ground and get a deeper picture of their context.

Pre-interview research

This is where I probably spend more time than most people expect because it's where the real insights start forming.

For each participant, I:

  • Scan their LinkedIn for patterns — how long they've been in their last few roles, the types of accomplishments they highlight, industry context that might shape their perspective

  • Review their screener responses, especially their answer about their biggest challenge right now

  • Look for conversation hooks — shared connections, recent job changes, company news that might affect their priorities

This isn't stalking. It's prep. When you understand someone's professional context, you can ask better follow-up questions and connect their responses to their actual world.
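
If it helps, this prep can live in a simple per-participant sheet. Here's a rough sketch in the same spirit as the guide above; every field name and example value is hypothetical:

```python
# Hypothetical per-participant prep sheet mirroring the checklist above.
# The field names and example content are illustrative assumptions.

participant_prep = {
    "name": "Jordan (VP of Sales, promoted three months ago)",
    "linkedin_patterns": {
        "tenure_in_recent_roles": ["18 months", "2 years", "3 years"],
        "highlighted_accomplishments": ["built an outbound team from scratch"],
        "industry_context": "B2B SaaS, mid-market",
    },
    "screener_biggest_challenge": "Forecasting is a guessing game right now.",
    "conversation_hooks": [
        "shared connection at a former employer",
        "company just announced a new funding round",
    ],
}
```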

The streaming setup

Here's something that sets my approach apart: while I'm conducting 1:1 interviews, my clients observe live through a stream. I get permission to record before we start, but I don't mention the live audience to keep the conversation natural.

My current setup: sharing my Zoom screen through a Google Meet to the client team. I previously used a more complex Zoom → OBS → Restream → YouTube Live chain, but the simpler approach works better: roughly 95% reliability versus 80%, even with the double video encoding.

More important than the how, though, is the why: the client team can send me follow-up questions in real time through a side channel (Slack, Discord, Messages). This directly influences the conversation without breaking the intimate 1:1 dynamic.

After the interview, I post the recording as an unlisted YouTube video with the VTT transcript for team members who couldn't attend live.

The reality check around scheduling

I've adapted the GV model from doing all the interviews in one day to two sessions of 2-3 interviews in the same week.

Google Ventures has leverage I don't have. When GV says "we're doing research," portfolio companies clear their calendars. When an external facilitator like me says it, teams need more flexibility.

The modified schedule works better anyway. Teams can process insights between sessions and come back with sharper questions. And frankly, watching five hours of customer interviews in a row is mentally exhausting. Breaking it up keeps everyone more engaged.

Why external facilitation isn't a “nice to have”

Teams often ask why they need someone external to run customer interviews. The answer comes down to objectivity that's hard for internal teams to achieve.

  • You bring baggage. When you're building something, you want it to succeed. That desire subtly influences how you ask questions and interpret answers. Customers pick up on this energy and adjust their responses to be more helpful, which means less honest.

  • They want to help you. When customers know you built something, they become co-conspirators in your success. They'll soften criticism and amplify praise. An external facilitator doesn't trigger this dynamic.

  • You miss the subtext. Internal teams focus on what customers say about their product. I focus on what customers reveal about their world — the context that determines whether your solution actually fits their life.

I've seen product teams convinced they had strong product-market fit because customers said positive things about their prototype. But that positivity was guilt or misguided goodwill, not real demand. When I dug into the discovery conversations, those same customers revealed they'd tried similar solutions before and abandoned them. Or that my client's real competition was doing nothing at all; the problem just wasn't dire enough to act on. That context changes everything.

What clients experience in real-time

Observing live creates something that recorded sessions can't: immediate team alignment around what matters to customers.

I watch product teams have real-time "aha" moments when they see their assumptions challenged by actual human behavior. A product manager discovers the feature they've been debating for months solves the wrong problem. A founder realizes their key differentiator isn't what customers care about.

The side-channel messaging lets the team dig deeper on surprising responses. If a customer mentions something unexpected, the team can ask me to follow up immediately rather than waiting weeks to schedule another interview.

Most importantly, teams leave these sessions with shared understanding. No one can later claim "that's not what the customer meant" because everyone witnessed the same conversation.

Something else to consider

"It is difficult to get a man to understand something when his salary depends upon his not understanding it."

Upton Sinclair

This captures why internal teams struggle with customer interviews. When you've spent months building a feature, your career success depends on that feature working. Even with the best intentions, that creates unconscious bias in how you ask questions and interpret answers.

External facilitators don't have skin in the game. We have no emotional attachment to your prototype succeeding. That objectivity lets us ask the harder questions and hear the uncomfortable answers that lead to breakthrough insights.

The six-week ROI

Why dedicate six weeks to this process instead of just building and learning from real usage data?

Because the cost of building the wrong thing grows exponentially. By the oft-cited rule of thumb, it costs 10x more to fix a problem in development than in design, and 100x more to fix it after launch with real customers. A flawed assumption that takes $1,000 to correct at the design stage becomes a $100,000 problem once it ships.

But here's the deeper value: six weeks of customer discovery creates clarity on priorities that cuts through all the noise. Stakeholder requests, competitor feature envy, internal opinions about what users "obviously" want — it all becomes secondary to what you've learned about your bullseye customers.

When teams understand their bullseye customer deeply — their triggers, context, and real motivations — they stop building features that seem logical and start building solutions that customers actually hire.

The prep work isn't just helpful. It's what separates actionable insights from polite lies.

In a world where most product decisions are still based on internal opinions rather than customer evidence, that difference determines who builds products that actually matter versus products that make sense in conference rooms.

Until next time,

Skipper Chong Warson

Making product strategy and design work more human — and impactful

Next time, I'll walk through what happens after the interview — how to pull insights from hours of conversation and turn them into product decisions that stick.

Need outside thinking on a tough business challenge? Book an intro call with How This Works co
