There's a low hum running through most conversations right now: the anxiety that if you're not mastering the latest AI tools, you're falling behind. Everyone else is getting ahead. You're slipping.
In my opinion, that is misdirected.
Confession time
I'll admit it: I use AI. I use Claude, I use Gemini; I’ve played around with Claude Code and Figma Make. But not for thinking. I do that myself.
I have a Notes file called "b-sides and rarities" — I make a new one every year to collect random bits and bobs, and I’ve been doing this since 2021. That's where ideas live. I voice type into it, scribble on sticky notes to type in later; there are lots of ways to input. But mostly the pattern is this: I put something together, sit with it for a few days, and then take the next step with it. My methods vary.
I might string together a quick note and ask Claude to summarize it. Or I'll request a few quick drafts in different directions and see what lands. I use Claude as a last-mile editor too — smoothing things I've already thought through, using a version of this prompt:
Don't rewrite significantly or add invented facts. Focus on improving format, grammar, clarity, flow, and removing redundancies. Keep it concise. It should feel human, not like fake rubbery AI.
My actual method: I think. I sit in that tension. I research some. I document that or I free write. Then, I might use AI to smooth over what I've already thought through. Or prompt it to critique — one of my most common requests. It can help me zoom out. I use it as an addition, not a handover.
Still, I keep hitting moments with clients where we’ve made something paired with AI — or even without it — and something feels off. Even for things that are straightforward and tactical.
Two examples
A few weeks ago, I was reviewing 11 headline options Claude generated for the How This Works section of the Bullseye Customer Sprint page. I'd started with five I’d already written and requested 11 others. And the outputs were fairly okay: logically sound, hitting the right points.
But I read through them and thought, “They don’t feel right.” Off somehow. Like a copy of a copy done too many times. Or like someone wearing clothes that don’t quite fit.
So I stopped playing the editor and evaluator. I didn't think about whether the structure was good or the logic worked. Instead, I took a minute and closed my eyes.

The gap between "logically sound" and "actually lands" shows up immediately when you critique each option.
I pictured my actual customers. The ones I've worked with. Bootstrapped and funded early-stage founders, 6-12 months of runway, staring at conflicting user feedback. Exhausted.
They don't want a wild goose chase. They want to figure out their product with evidence, not hope and crossed fingers. They want relief.
That distinction — relief, not excitement — didn't come from analyzing AI outputs or refining prompts. It came from working with actual humans who were referred to me by other humans who trust me. From talking to them about why they signed up to work with me. I heard their fatigue, their overwhelm. The 57 browser tabs. The specific triggers that would make them receptive to what I offer.
So I rewrote the headline in a few minutes based on that. Then I fed it back to Claude for polish. Which felt right.
What your gut knows that text patterns don't
AI is great at pattern matching. It's learned what words tend to follow other words in what contexts. Your gut is great at pattern recognition that comes from lived experience. Those are different things.
AI can generate a user flow that follows logic. But it can't predict where users will feel confused because it's never experienced confusion. And it can’t feel the frustration of trying to tap a too-small button on your phone with freezing fingers while walking a dog in a New York City winter.

The actual conditions your customers might navigate, not the secondhand version that AI works from.
AI can construct a positioning statement that hits what most people think are the right parts. It's ingested many examples. But it's never felt the tension of actually speaking those words to another human. It's never needed to repair trust after making a mistake.
And it’s not just about generating text. I'm working through this on a client project. Last month, my team built a high-level architecture document and workflow prototype for lawyers. We'd done preliminary interviews and workshops, pulled insights from them, and incorporated everything into the docs. But something was missing — the throughline between what we'd learned and what we'd built.
So I doubled back in our last meeting. Asked questions again. Because we needed to be sure. The built architecture was firm and the prototype thesis was clean. But it was dancing around the actual tension the lawyers had described — the specific moments where their current process breaks down and costs them time.
I asked:
What do you do first thing on a Monday to start your week?
Walk me through the last case where something went wrong. What happened? When did you realize it? What did you do?
If you could wave a magic wand and change a single thing about how you work today, what would it be?
Yes, we'd asked questions like this before, but I needed a refresh. I needed specific scenarios. Now we're rejiggering the specification into a current-state journey map and then a future-state one. We want to show how the proposed prototype actually solves for the tension they described — not the idealized version buried in a mountain of text.
That shift didn't come from refining the architecture. It came from remembering what we'd heard, felt, and seen when we sat across from those lawyers the first time. Then, asking for a refresh.
So we’ll connect the dots, slightly shifting what we'd built.
Feeling that gap between what looks right and what actually lands? Let’s work through it together.
Stop treating tool adoption like a survival metric
You probably don't need to know Cursor better — sorry, Cursor. You don't need to master the latest prompt engineering technique. And you definitely don't need to feel anxious about what you're not doing with AI.
What you need is to trust this: when something feels off about an AI output, that feeling is signal. Not noise. Not overthinking. Follow the signal.
Something else to consider
"If we want AI to help people at work, consider making more cranes, and fewer looms."
That's not just advice for AI builders. It's permission for you to decide what kind of tool AI is in your hands.
You (and your body and gut and mind) know things about your audience that text patterns don't capture. Your experience of sitting across from actual humans gives you prediction power that no LLM has access to. That's not a limitation you're working around. That's your actual advantage.
Use AI for what it's good for. Polish. Zoom out. Riff on directions. Play with options. Leverage its machine learning while also remembering the limitations.
But keep your thinking. Pay attention to your gut. Keep the moments where you close your eyes and actually imagine what it would feel like to be on the receiving end of what you're making.
When something feels "right but not right," stop. Don't keep prompting. Don't fiddle with the output. Close your eyes. Go for a walk. What does your embodied experience tell you? Where does it snag? What would actually land?
Trust that. That's the work AI can't do.
My question
I want to know: where have you felt this difference? Or the flip side — where have you trusted your embodied knowledge over the AI output and been right?
Hit reply. I'm curious what you're noticing.
Until next time,
Skipper Chong Warson
Making product strategy and design work more human — and impactful
—
Ready to help your team understand when to trust AI outputs and when to trust embodied judgment? Book an intro call with us
If someone forwarded this to you and you want more of these thoughts on the regular, subscribe here


