Why everyone gets fake door tests wrong
Jacob Dutton
28 Aug 2025

The Fake Door test is the most popular validation method we see among corporate innovation teams, and also one of the most widely misused (and misunderstood).
Most teams think Fake Door testing is as straightforward as building a landing page, driving traffic and measuring sign-ups. They build generic "coming soon" pages, share them on social media, and count the email addresses they collect as validation.
All this does is generate inflated sign-up numbers that mean nothing: thousands of email addresses from people who'll never buy and, ultimately, false confidence that leads to expensive failures.
What's a Fake Door test?
A Fake Door test is a simple landing page that presents your venture, product or service as if it already exists, captures interest through email sign-ups, then reveals it's not ready yet.
Done properly, it validates genuine demand and tests your positioning. Done wrong, it collects meaningless email addresses from curiosity seekers.
The key is telling the difference between real interest and idle browsing.
How a SaaS company wasted £400K on fake validation
A B2B SaaS company we work with wanted to add AI-powered analytics to their project management platform. Traditional market research suggested high demand for AI features, so they decided to test with a fake door approach before building.
Their fake door approach:
Built generic landing page: "Revolutionary AI Analytics - Coming Soon"
Promoted across LinkedIn, industry forums, and email lists
Simple email capture with "Join the beta waitlist"
2,847 sign-ups in 6 weeks
The false validation: Leadership saw nearly 3,000 sign-ups and approved 18 months of development budget. The numbers seemed compelling. After all, if thousands of people signed up, there was clearly massive market demand.
The launch reality: When AI Analytics launched, actual uptake was horrendous. Only 3% of waitlist subscribers signed up for trials. Of those who tried it, 89% cancelled within 30 days. Exit interviews revealed the brutal truth: people signed up because "AI analytics" sounded interesting, not because they had a specific problem it solved.
What went wrong with their test:
Generic positioning: "AI Analytics" could mean anything to anyone
No qualification: Anyone could sign up regardless of actual need
Curiosity clicks: People joined the waitlist for a thing they'd never buy
Wrong traffic sources: Industry forums attracted browsers, not buyers
The real test they should have run: A specific landing page testing "Predict which projects will miss deadlines 2 weeks in advance" with qualification questions about current project management pain points.
The expensive lesson: When we helped them reposition the test with a specific problem focus and better qualification, they got 312 sign-ups instead of 2,847. But 67% of those converted to paid users. The smaller, qualified audience represented actual demand worth building for.
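The gap between the two tests is easiest to see as back-of-the-envelope arithmetic. A quick sketch using only the figures from this case study (and assuming the 89% 30-day cancellation rate applied across all trial users):

```python
# Compare the two funnels described above using the article's numbers.

generic_signups = 2847
generic_trial_rate = 0.03    # only 3% of the waitlist started a trial
generic_churn_30d = 0.89     # 89% of trial users cancelled within 30 days

specific_signups = 312
specific_paid_rate = 0.67    # 67% of qualified sign-ups converted to paid

generic_retained = generic_signups * generic_trial_rate * (1 - generic_churn_30d)
specific_paid = specific_signups * specific_paid_rate

print(f"Generic test:  {generic_signups} sign-ups -> ~{generic_retained:.0f} retained users")
print(f"Specific test: {specific_signups} sign-ups -> ~{specific_paid:.0f} paying users")
```

Roughly nine retained users from nearly 3,000 sign-ups, versus around 200 paying users from 312. The "smaller" test measured something the bigger one couldn't: willingness to buy.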
Why most Fake Door tests produce junk data
Mistake #1: Testing everything for everyone
Generic value propositions like "Powerful AI tools" attract curious browsers, not potential buyers. Specific problems attract people who actually have those problems.
Mistake #2: Making sign-up too easy
No friction means no qualification. Anyone will give you their email for something that might be interesting someday. Add qualifying questions to filter genuine interest.
Mistake #3: Wrong traffic sources
Social media and forums attract browsers. Industry newsletters and targeted ads reach people actively looking for solutions.
Mistake #4: Measuring vanity metrics
Sign-up volume doesn't predict purchase behaviour. Focus on engagement quality and post-sign-up actions.
Mistake #5: No follow-up validation
Teams collect emails and assume demand is proven. Survey or ideally speak to your sign-ups immediately to understand why they registered and what they expected.
How to run Fake Door tests that predict real demand
1. Test specific problems, not broad categories
Instead of "Marketing Automation Platform," test "Send personalised emails based on website behaviour." Specific problems get specific audiences who actually need solutions.
2. Add meaningful qualification
Don't just collect emails. Ask about current solutions, budget authority, timeline for implementation. This filters genuine prospects from casual browsers.
3. Drive qualified traffic
Target people actively searching for solutions, not passive social media audiences. Use search ads, industry publications, and partner referrals.
4. Measure engagement depth
Track what happens after sign-up: email open rates, survey completion, sales conversation requests. Deep engagement predicts purchase behaviour.
5. Interview your sign-ups immediately
Within 48 hours, survey a sample of people who signed up. What problem were they hoping you'd solve? What alternatives are they considering? This qualitative data is a validation goldmine.
Try this next week
Take an idea you're working on and create two fake door tests: one generic ("Smart Business Analytics") and one specific ("Identify customers likely to churn next month"). Drive similar traffic to each and measure not just sign-ups, but engagement quality.
The specific version will likely get fewer sign-ups but more qualified interest. Those engaged prospects represent real market demand worth building for.
Fake Door testing works when it attracts genuine prospects, not casual browsers. Most teams optimise for volume, but the most successful teams we work with optimise for qualification.