Surveys and the polite lies customers tell

Jacob Dutton

4 Sept 2025

Validation surveys are the most overused and undervalued tool in innovation (and 90% of teams are asking the wrong questions entirely).

Surveys feel safe. They're quick to set up, cheap to run, and produce lots of data that looks convincing in presentations. Teams love them because they generate quantifiable evidence that stakeholders trust.

But most validation surveys are worthless. They measure what people think they want, not what they'll actually do when it matters.

What is a validation survey?

A validation survey is a structured questionnaire designed to test specific assumptions about customer jobs, pains, gains, or product-market fit. Done properly, it provides early signals about whether you're solving real problems. Done poorly, it confirms your biases with meaningless data.

The difference is in how you ask.

How a fintech team got 94% positive feedback on a product nobody used

A fintech startup we were working with inside a bank was building an AI-powered budgeting assistant for millennials. Their validation surveys consistently showed strong demand across all their assumptions.

Here's what they asked, and what they found:

  • "How important is automated budgeting to you?" (87% said "very important")

  • "Would you use an AI assistant to help manage your finances?" (94% said "yes")

  • "How much would you pay for smarter financial insights?" (Average: £12/month)

  • "Rate these features from 1-10 for importance" (All scored 7+ average)

With 94% positive response rates and clear willingness to pay £12/month, they started building a comprehensive budgeting platform. The surveys validated everything: their problem hypothesis, solution approach, and pricing strategy.

But when they soft-launched to their survey respondents, engagement was catastrophic. Only 12% downloaded the app. Of those, 89% never completed setup. The few who tried it used it twice on average before abandoning it completely.

What their surveys actually measured was:

  • Hypothetical importance of financial topics people know they should care about

  • Polite responses to questions about AI (which sounds innovative)

  • Aspirational spending on self-improvement

  • Generic feature ratings with no trade-off context

Through real user interviews, they learned that people already had budgeting methods that worked "well enough." They wanted financial peace of mind, not more financial management tasks. The AI assistant felt like homework, not help.

So, instead of budgeting assistance, they pivoted to "worry-free" financial monitoring (alerts when something unusual happened, not daily budget tracking). Simple reactive notifications, not proactive planning tools.

Why validation surveys mislead teams

The core issue isn't that surveys are inherently flawed; it's that most teams design them to confirm what they hope is true rather than challenge what might be false. Five systematic problems turn surveys into bias confirmation tools:

  1. The social desirability problem: People give answers that make them look responsible, forward-thinking, and rational (nobody admits they don't want to budget better or improve their finances!).

  2. Hypothetical vs. actual behaviour: Surveys ask what people think they would do, not what they actually do when faced with real choices, time constraints, and competing priorities.

  3. No trade-off context: Rating features in isolation doesn't reveal what people will choose when forced to pick between options or invest limited time and attention.

  4. Leading question design: Most teams unconsciously write questions that guide respondents toward answers that validate existing assumptions.

  5. Missing the 'why' behind stated preferences: Surveys capture what people say they want but miss the underlying jobs, contexts, and constraints that drive real behaviour.

How to design validation surveys that reveal the truth

The solution isn't to abandon surveys entirely; it's to design them like behavioural experiments rather than opinion polls. Instead of asking what people think they want, ask questions that reveal what they actually do when faced with real constraints and trade-offs:

  1. Test specific scenarios, not general preferences: Instead of "Would you use automated budgeting?" ask "When you overspent last month, what did you do the next day?" Real past behaviour is the best predictor of future actions.

  2. Force ranking with trade-offs: Don't ask "Rate these features 1-10." Ask "You have 30 minutes per week for financial management; rank these in order of what you'd actually spend time on."

  3. Include behaviour validation questions: Follow up stated preferences with actions: "You said budgeting is important; what budgeting method did you try last? When did you last use it?"

  4. Test willingness to sacrifice, not just willingness to gain: Ask "What would you stop doing to make time for better financial planning?" rather than "Would you like better financial planning?"

  5. Use the Sean Ellis disappointment test: For existing products, ask "How disappointed would you be if this product disappeared tomorrow?" 40%+ answering "very disappointed" indicates real attachment.
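The disappointment test reduces to a simple proportion against the 40% benchmark. A minimal sketch of the scoring, with made-up response data (the response labels and counts here are illustrative, not from any real survey):

```python
# Hypothetical responses to "How disappointed would you be if this
# product disappeared tomorrow?" -- counts are illustrative only.
responses = (
    ["very disappointed"] * 34
    + ["somewhat disappointed"] * 41
    + ["not disappointed"] * 25
)

def disappointment_score(responses):
    """Share of respondents answering 'very disappointed'."""
    very = sum(1 for r in responses if r == "very disappointed")
    return very / len(responses)

score = disappointment_score(responses)
print(f"{score:.0%} said 'very disappointed'")
if score >= 0.40:
    print("At or above the 40% benchmark: signal of real attachment")
else:
    print("Below the 40% benchmark: keep probing for must-have value")
```

The point of the fixed threshold is comparability: tracked with the same wording over time, the score tells you whether attachment is growing, not just whether people are polite.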

What good validation surveys actually measure

Rather than testing hypothetical preferences, good validation surveys focus on revealing existing patterns of behaviour and genuine constraints. They look for evidence of what customers already do, not what they say they might do:

  • Jobs customers are already trying to do: "When you last needed to [specific outcome], what steps did you take?" reveals existing behaviour patterns and workarounds.

  • Pain intensity through revealed preference: "What's the most you've spent trying to solve [specific problem]?" shows how much pain is worth in real money.

  • Current solution satisfaction gaps: "What's frustrating about how you currently [achieve outcome]?" identifies specific improvement opportunities.

  • Behavioural consistency: Cross-reference stated preferences with past actions to identify gaps between aspiration and reality.

  • Contextual constraints: "What would prevent you from [desired behaviour] even if you wanted to?" reveals real-world barriers.
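The behavioural-consistency check above can be sketched as a simple cross-reference: flag respondents whose stated importance isn't backed by any recent action. The field names and cutoff here are assumptions for illustration, not part of any particular survey tool:

```python
# Illustrative sketch: cross-reference stated importance (1-10) with
# reported past behaviour to surface aspiration/reality gaps.
# Field names and the cutoff of 7 are assumed for this example.
respondents = [
    {"id": 1, "stated_importance": 9,  "used_budget_tool_last_month": False},
    {"id": 2, "stated_importance": 8,  "used_budget_tool_last_month": True},
    {"id": 3, "stated_importance": 10, "used_budget_tool_last_month": False},
    {"id": 4, "stated_importance": 4,  "used_budget_tool_last_month": False},
]

def aspiration_gap(respondents, importance_cutoff=7):
    """IDs of respondents who rate the job highly but report no recent behaviour."""
    return [
        r["id"]
        for r in respondents
        if r["stated_importance"] >= importance_cutoff
        and not r["used_budget_tool_last_month"]
    ]

gap = aspiration_gap(respondents)
print(f"{len(gap)} of {len(respondents)} show a stated/revealed gap: {gap}")
```

A large gap group is exactly the fintech team's 94%-yes, 12%-download problem in miniature: it tells you whose "very important" answers to discount before you build.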

When validation surveys work best

Validation surveys are most effective as early-stage signals rather than definitive proof points. They work particularly well in four specific contexts where the limitations of survey data are less problematic:

  1. Early assumption testing: Quick validation of basic problem-solution hypotheses before investing in prototypes or detailed research.

  2. Priority ranking with existing users: Understanding what matters most to customers who already experience your value proposition.

  3. Feature trade-off decisions: Helping customers choose between specific, concrete options rather than rating abstract concepts.

  4. Sentiment tracking over time: Measuring changes in satisfaction, recommendation likelihood, or disappointment ratings with consistent methodology.

Try this next week

Take a current innovation project and design two surveys: one asking what people want, another asking what they actually do.

Compare "Would you use [solution]?" with "When you last faced [specific problem], what did you try?" The gap between stated interest and revealed behaviour shows you where your assumptions need testing.

Validation surveys work when they test real behaviour and actual constraints, not hypothetical preferences and aspirational thinking. Most teams survey what customers say, but successful teams survey what customers do.