Last spring, I needed to recruit bullseye customers for a sprint. This was my second client and I wanted to move fast. So I posted a Google Form to LinkedIn and a few Slack communities, looking for the bullseye customer we'd formulated over a few sessions: a narrow kind of ideal customer profile (ICP).

This is what I asked for. What I got back needed triple-checking.

Within days, I received 82 responses. Among them, 12 perfect matches. What I didn't realize at the time (though it's my current suspicion) is that most of these matches were AI-generated answers gleaned from my LinkedIn post. Not to mention the fake profiles and concocted credentials I'd soon encounter.

But I’m getting ahead of myself. I scheduled the first five (5) interviews and felt good that things were moving along.

82 responses, 12 perfect matches on paper. But the actual experience tells a different story.

Then the interviews started. And everything fell apart.

The first person never showed up. No message, no response to my follow-up email asking, "Do you want to reschedule?"

The second person arrived 10 minutes late and claimed to be in Ohio. But his resume said New York City. I opened with some small talk: it's pretty early for you. It was 9:30 AM on the east coast, and since Ohio is also on Eastern time, it was 9:30 AM for him too. He shrugged it off. And his background felt off, specifically the lighting in his room: a bright white overhead light, clearly indoors, with closed curtains, which looked nothing like somewhere it was morning.

I should also point out that Sam’s resume (below) was flawless: five (5) years of Product Operations experience, names of companies, legitimate progression, every certification you'd want to see. But when I asked basic questions about KYC (Know Your Customer) compliance, the work he'd done for financial services clients in GitHub (he claimed he wrote code, even though we were looking for non-coders), and repositories he could talk about, even show me, he had nothing but generic answers. No specifics. So we wrapped early, about 10-15 minutes in.

Samuel Smith’s resume looked perfect on paper. The actual interview produced only generic answers.

The third person sent an email about a medical emergency and wanted to reschedule. That left me with three of my five interviews gone and time burning. So I did what I should have done earlier: I called a timeout with the client and suggested a quick huddle.

We discussed what was happening. It felt off. The screener data looked perfect, but the actual humans on the other end didn't match the profiles. So I reached out to the remaining "perfect match" participants for quick video calls to confirm their experience and context before the full interviews. I also wanted to verify that they were real.

Not one of them responded. Neither did the couple of people from the first set who’d wanted to reschedule.

Then I tried the seven (7) strong matches. Six of the seven never responded. And remember, there was an incentive attached: $120 for an hour, which is what I recommend and the same thing Google Ventures does.

One person did respond from that group, but not in the way I’d expected. They didn't say no. They asked: "Are you racist?" No context. No explanation. Just that.

When I reached out to verify participation, most ghosted me. This one responded with hostility.

I don't know what prompted this response. Maybe they felt targeted, maybe they were testing something. Either way, it was a very clear sign that I was on the wrong path.

Now, a week was gone. Client time (and my time) wasted. And I'd let people down because I'd convinced myself the data was enough.

Trust the feeling

Here's what I couldn't ignore: something felt off from the first interview, the one where no one even showed up. But I didn't trust that feeling because the screener told me everything should be fine. I had data, numbers, and matches. So I pushed forward.

The funny part? I know better. I've recruited for hundreds of projects. I've used User Interviews, Respondent, Dscout, a bunch of different tools. I thought I could spot the problems ahead of time with a good screener. But in 2025, a Google Form isn't vetting anymore; it's more like a sticky flytrap. It's a good start, but it doesn't solve the whole problem.

Since then, we've run five (5) more sprints using userinterviews.com and done about 80 interviews. Almost zero problems. Sure, a couple of no-shows, but no Samuel Smiths with impossible lighting for that time of morning in Ohio.

But that week taught me something worth naming: when your gut says something's off, that's signal, not noise. It might be overthinking (and what isn't, right now?), but it's worth paying attention to. I had 82 people who filled out a form and looked good on paper. Maybe some number of bots. I dunno. But the actual humans didn't match what they claimed. I ignored it because I thought volume, plus my experience, would back me up.

The bigger thing I'm sitting with

How do we keep the accessibility and speed of recruitment without drowning in noise? Paid platforms solve some of it, but they're not always an option for early-stage teams running their first sprint on a tight budget. And even when you use them, they don't solve for everything.

The real issue is that I had data telling me one story and then humans telling me another. I chose to believe the data because it felt more scientific. But my gut was already pointing at the gap.

One thing I think has changed is the sophistication of the fakes. The Samuel Smiths aren't just people who don't show up; they're fabricated to pass initial screening. They're pulling answers from your LinkedIn post and crafting resumes that match your requirements line by line. And the clarity I had in my screener? It probably made it easier for them to match. Even with crystal clarity on your end, you've got to verify on theirs too.


Something else to consider

"Intuition will tell the thinking mind where to look next."

Jonas Salk

Here's what I missed: my intuition wasn't telling me to ignore the data. It was telling me where to look next — at the humans behind the responses. The screener gave me a direction (82 people). My gut said, "But are they real? Do they actually know what they claim?" I think that’s what Salk meant.

The thinking mind can process 82 responses and find 12 perfect matches. But intuition catches what data misses: the lighting that doesn't match the time of day, the resume that's too polished, the silence when people should respond. Intuition says, "Look deeper here." The thinking mind then asks the right follow-up questions.

I had both. I just trusted the wrong one.

Hit reply. I'm curious what you're noticing.

Until next time,

Skipper Chong Warson

Making product strategy and design work more human — and impactful

Ready to understand when to trust your gut over your data? Book an intro call with us

If someone forwarded this to you and you want more of these thoughts on the regular, subscribe here
