The £185K pop-up store that proved nothing

Jacob Dutton

6 Nov 2025

The Temporary Store test is one of the highest-fidelity experiments in our toolbox, but also one of the most expensive to get wrong.

Most teams think running a pop-up store is just about setting up shop somewhere trendy, opening the doors, and seeing who walks in. They lease expensive retail space in high-traffic areas, stock it with product samples, and measure success by foot traffic.

All this does is rack up massive overhead, attract window shoppers who'll never buy, and ultimately burn through innovation budgets on theatre instead of evidence.

What's a Temporary Store Test?

A Temporary Store test is a short-term retail location opened specifically to see whether customers will actually purchase your product face-to-face. It's not about brand awareness or generating buzz: it's about watching people pull out their wallets.

Done properly, it validates real buying behaviour and uncovers pricing objections. Done wrong, it becomes an expensive marketing stunt that tells you nothing about actual demand.

The key is distinguishing between people who browse and people who buy.

How a consumer brand wasted £185K on a pop-up that proved nothing

A global consumer goods company we work with wanted to test a new eco-cleaning product line before committing to full retail distribution. Leadership loved the sustainability angle, so the innovation team decided to validate demand with a pop-up store before approaching major retailers.

Their pop-up approach:

  • Secured prime location in trendy neighbourhood

  • Designed Instagram-worthy store interior with photo opportunities

  • Stocked full product range (15 SKUs)

  • Ran for 2 weeks with social media promotion

  • 1,247 visitors, 412 email signups

The false validation: Leadership saw over 1,200 visitors and hundreds of email addresses. The Instagram posts got tonnes of likes. Executives were impressed by the "brand engagement." Everyone assumed the concept was validated.

The brutal reality: Only 23 actual purchases during the entire 2-week run. That's a 1.8% conversion rate. Exit surveys showed that most visitors came in because the store "looked cool," not because they had a problem with their current cleaning products.

As for the email signups: most people just wanted a discount code for future online purchases they never intended to make.

What went wrong with their test:

  • Wrong location focus: Chose trendy area for impressions, not target customer concentration

  • No qualification: Visitors had no reason to be there except curiosity

  • Too many options: 15 SKUs meant no clear value proposition testing

  • Instagram theatre: Store designed for photos, not purchase behaviour observation

  • Vanity metrics: Counted visitors and social engagement instead of conversion rates

The test they should have run: A 3-day temporary store in a suburban area with high concentration of families (their actual target). Test only 3 core products. Explicitly promote "try before you buy" with clear pricing. Track not just purchases but detailed conversations about switching costs from current products.

The expensive lesson: When we helped them rerun the test with proper focus, they got 87 visitors instead of 1,247. But 29% of those visitors made purchases. More importantly, the conversations revealed their pricing was 40% too high for mass-market retail and their value proposition needed to emphasise "just as effective" rather than "planet-saving."
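The arithmetic behind those two runs is worth making explicit, because it's the comparison most dashboards hide. A quick sketch (the rerun's purchase count of ~25 is inferred from the stated 29% of 87 visitors, so treat it as an assumption):

```python
# Compare the two pop-up runs by conversion rate, not raw footfall.
def conversion_rate(purchases, visitors):
    """Share of visitors who actually bought, as a percentage."""
    return 100 * purchases / visitors

# Original Instagram-friendly pop-up: big traffic, almost no buyers.
original = conversion_rate(23, 1247)   # ~1.8%

# Focused suburban rerun: ~25 purchases inferred from "29% of 87 visitors".
rerun = conversion_rate(25, 87)        # ~28.7%

print(f"Original: {original:.1f}% | Rerun: {rerun:.1f}%")
print(f"Conversion multiple: {rerun / original:.1f}x")
```

Fewer than a tenth of the visitors, but a conversion rate roughly fifteen times higher: the evidence got stronger as the test got smaller and cheaper.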

Why most Temporary Store tests waste money

Mistake #1: Optimising for foot traffic instead of target customers

High-traffic locations attract browsers, not buyers. A smaller space with concentrated target customers gives better evidence at lower cost.

Mistake #2: Making it about brand awareness

If you're designing for Instagram, you're not testing demand. Every design decision should facilitate genuine purchase conversations, not photo opportunities.

Mistake #3: Testing too many things at once

Full product ranges split attention and muddy results. Test specific value propositions with minimal SKUs to understand what actually drives purchase decisions.

Mistake #4: No structured data collection

Teams rely on "vibe" and general impressions instead of tracking specific objections, price reactions, and competitive comparison questions.
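One lightweight antidote to vibe-based readouts is logging every visitor interaction against a fixed schema, so objections and price reactions are countable at the end of the day. A minimal sketch, assuming a simple CSV workflow (the field names and categories here are illustrative, not a prescribed template):

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class VisitorRecord:
    """One row per visitor: the outcome plus the specifics teams usually skip."""
    outcome: str              # "purchased", "nearly_bought", or "browsed"
    price_reaction: str       # e.g. "accepted", "too_high", "asked_for_discount"
    objection: str            # free text: what stopped them buying
    current_alternative: str  # what they use today

def save_records(records, path):
    """Write the day's records to CSV so results are tallies, not anecdotes."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["outcome", "price_reaction", "objection", "current_alternative"]
        )
        writer.writeheader()
        writer.writerows(asdict(r) for r in records)

records = [
    VisitorRecord("nearly_bought", "too_high", "wanted smaller pack size", "supermarket own-brand"),
    VisitorRecord("purchased", "accepted", "", "eco brand bought online"),
]
save_records(records, "day1_visitors.csv")
```

Even two columns — outcome and objection — turn an end-of-day debrief from "people seemed interested" into "11 of 30 said the price was too high."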

Mistake #5: Confusing interest with intent

Collecting email addresses and counting visitors tells you nothing about willingness to pay. Track actual purchase attempts, including those who nearly bought but didn't.

How to run Temporary Store tests that predict real demand

1. Choose location by customer concentration, not foot traffic

Find where your target customers already are, even if it's unglamorous. A community centre might teach you more than a high street location that costs 10x more.

2. Design for observation, not aesthetics

Your store should facilitate purchase conversations. Set up to watch and listen: What do people touch first? What questions do they ask? Where do they hesitate?

3. Test specific hypotheses with minimal products

Don't bring your full catalogue. Test 2-3 specific products or price points that represent core assumptions about what customers will pay for.

4. Create reasons for genuine purchase decisions

Don't just let people browse. Use "exclusive first-access pricing" or "only available here" to force real purchase consideration, not "maybe later online."

5. Conduct structured exit conversations

For everyone who visits (whether they buy or not), have scripted questions ready. What were they hoping to find? What stopped them buying? What alternatives do they currently use?

Try this next week

Before you commit to running a temporary store, test the same hypothesis with a fake door test or presale campaign first. If you can't get people to express purchase intent in a £5K digital test, a £50K physical store won't magically create demand.

The best Temporary Store tests happen after you've already proven demand digitally. They're for understanding purchase objections and in-person behaviour, not discovering whether anyone wants your product.