I was listening to a Ryan Holiday talk during my morning commute.
Which, since I work from home, really means an Instagram clip while walking downstairs to make coffee before my morning standup.
In the clip, he tells a story about Ulysses S. Grant that stuck with me.
Grant was riding at night with another officer. The prairie was dark and quiet. Then they heard it: a bone-chilling, almost unearthly sound. A pack of wolves somewhere in the darkness.
The other officer asked him: "How many do you think there are?"
Grant didn't want to betray fear. So he guessed the lowest number he thought plausible. Three, maybe four.
The officer nodded. They rode on.
Later, they found out there were dozens.
The Product Discovery Version of This
I've seen this pattern in product discovery more times than I can count.
A team hears something in user research: a complaint, a pattern, a signal that something is wrong. And instead of counting the wolves, they guess the lowest number that feels manageable.
"It's probably just a few power users." "That's an edge case." "We've heard this before but it hasn't moved the metrics."
They're not lying. They genuinely believe the number is small. But they believe it because they want it to be small: because a larger number would require a harder conversation, a bigger investment, a more disruptive change to the roadmap.
The Noise Problem
The flip side is also real. Product teams can overcorrect: hearing every piece of feedback as a signal, treating every complaint as a crisis, and losing the ability to distinguish between the wolves that matter and the ones that don't.
This is the noise problem. And in environments with high feedback volume (active user communities, enterprise customers with strong opinions, internal stakeholders with competing priorities), it's just as dangerous as undercounting.
The skill isn't just counting the wolves. It's knowing which wolves are worth counting.
How to Actually Count
Here's the framework I use:
Frequency: How often does this signal appear? Is it one user, or is it a pattern across user segments?
Severity: When this problem occurs, how bad is it? Is it a minor friction or a deal-breaker?
Breadth: Which user segments are affected? Is this a niche case or a mainstream experience?
Trend: Is this signal getting stronger or weaker over time? Is the problem growing or shrinking?
Alignment: Does this signal align with other signals you're seeing? Or is it isolated?
A signal that scores high across all five dimensions is a wolf worth counting. A signal that scores high on one but low on the others might be noise.
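To make the scoring concrete, here's a minimal sketch in Python. Everything in it is illustrative, not part of any real system: the 1-5 scale, the threshold, and the signal names are my assumptions about how you might operationalize the five dimensions.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    """A piece of user feedback, scored 1-5 on each of the five dimensions."""
    name: str
    frequency: int   # how often the signal appears
    severity: int    # how bad it is when the problem occurs
    breadth: int     # niche segment = 1 ... mainstream experience = 5
    trend: int       # shrinking = 1 ... growing = 5
    alignment: int   # isolated = 1 ... corroborated by other signals = 5

    def dimensions(self) -> list[int]:
        return [self.frequency, self.severity, self.breadth,
                self.trend, self.alignment]


def is_wolf(signal: Signal, threshold: int = 4) -> bool:
    """A wolf worth counting scores high across all five dimensions."""
    return min(signal.dimensions()) >= threshold


# Frequent and severe, but isolated to one segment and uncorroborated:
# probably noise, however loud it is.
edge_case = Signal("export timeout", frequency=5, severity=4,
                   breadth=2, trend=3, alignment=2)
print(is_wolf(edge_case))  # False
```

The one design choice worth noting: gating on the minimum score rather than the sum is what encodes "high across all five dimensions." An average would let one loud dimension mask four quiet ones.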
The Courage Part
The hardest part of counting the wolves isn't the framework. It's the courage to report the number accurately: even when the number is uncomfortable.
Grant's mistake wasn't that he couldn't count. It was that he let fear shape his estimate.
Product managers do the same thing. We let roadmap pressure, stakeholder expectations, and our own desire to be right shape what we're willing to see in the data.
Count the wolves. All of them. Then decide what to do about it.
What This Looked Like in My Work
The onboarding KYC compliance work at Tend was a signal-from-noise problem. The compliance system was interrupting users who were actually compliant, because it couldn't distinguish between a genuinely incomplete profile and one that was complete but not yet verified. I built the compliance state model that separated those two signals, reducing false-positive interruptions by 60% and protecting 30-60 day retention in the highest-risk window.
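The production system isn't public, so here's only the shape of the idea: a hypothetical sketch that separates "incomplete" from "complete but not yet verified," so only the first state can interrupt the user. All names (ComplianceState, should_interrupt) are invented for illustration.

```python
from enum import Enum, auto

class ComplianceState(Enum):
    """Hypothetical KYC states; the middle one is the key distinction."""
    INCOMPLETE = auto()            # the user still owes us information
    PENDING_VERIFICATION = auto()  # profile complete, checks still running
    VERIFIED = auto()              # checks passed

def should_interrupt(user_state: ComplianceState) -> bool:
    """Interrupt only when the user can actually fix something.
    Collapsing PENDING_VERIFICATION into INCOMPLETE is what produced
    false-positive interruptions for compliant users."""
    return user_state is ComplianceState.INCOMPLETE
```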
Read the full case study: Onboarding KYC: Tend