Stop debating, start multivariate testing

Jacob Dutton

10 Jul 2025

Multivariate testing is the most reliable way to let customer behaviour choose between different versions of your value proposition, pricing, or messaging.

Most teams debate endlessly about which approach will work better. Should the headline focus on efficiency or cost savings? Does the premium pricing signal quality or create barriers? Which call-to-action will convert more prospects?

These debates waste time and rely on internal opinions rather than customer behaviour. Multivariate testing settles the argument with data.

What's a multivariate test?

It's a way of comparing two or more versions of the same element: a control (A) against one or more variants (B, C, D, and so on), to determine which performs better with real customers.

You randomly show different versions to different segments of your audience and measure which generates more of the behaviour you want: clicks, sign-ups, purchases, or engagement.
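Random assignment is the mechanical heart of this. Here's a minimal sketch in Python of one common approach: hashing a visitor ID so each person is bucketed deterministically and sees the same version on every visit. (The function name, salt, and variant labels are illustrative, not from any particular tool.)

```python
import hashlib

def assign_variant(user_id: str, variants=("A", "B"), salt="mvt-demo"):
    """Deterministically bucket a visitor into a test variant.

    Hashing the visitor ID with a per-test salt gives a stable,
    roughly even split, and returning visitors always see the
    same version.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment is a pure function of the visitor ID, you don't need to store who saw what: re-running the function reproduces the split exactly.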

No opinions, no guessing, no internal politics. Just objective data about what actually motivates customer action.

How a logistics company discovered their £50M assumption was wrong

A major logistics company we work with was launching a new freight management platform for mid-market manufacturers. Internal stakeholders were divided on positioning: should they emphasise cost reduction or operational efficiency?

The sales team insisted customers cared most about reducing shipping costs. The product team thought efficiency and visibility were bigger pain points. Marketing wanted to lead with technology innovation. Operations thought reliability was the key differentiator.

Rather than letting internal opinions drive the decision, they ran a multivariate test across their target market.

The setup: Four different landing pages with identical functionality but different value propositions:

  • Version A (Control): "Reduce shipping costs by up to 30%"

  • Version B: "Gain complete visibility into your supply chain operations"

  • Version C: "AI-powered freight management for modern manufacturers"

  • Version D: "Never miss a delivery deadline again"

Each version targeted the same audience through identical ad campaigns, driving traffic randomly to each landing page. The goal was to measure which version generated the most demo requests from qualified prospects.

What they expected: The cost reduction message (Version A) would win based on sales team feedback.

What actually happened:

  • Version A (Cost focus): 2.3% conversion rate from visitor to demo request

  • Version B (Visibility focus): 4.7% conversion rate (more than double Version A)

  • Version C (AI/Technology focus): 1.8% conversion rate (lowest performance)

  • Version D (Reliability focus): 3.9% conversion rate (strong but not the winner)

They learned that mid-market manufacturers weren't primarily motivated by cost savings. Instead, they were frustrated by lack of visibility into their supply chain operations. They had budget for freight management but needed confidence that shipments would arrive as promised.

Based on these results, the team repositioned their entire go-to-market strategy:

  • Made supply chain visibility the core value proposition across all marketing

  • Restructured the product demo to lead with tracking and transparency features

  • Trained the sales team to ask about visibility challenges before discussing cost

  • Developed case studies focused on operational control rather than cost savings

The repositioned approach delivered:

  • 87% increase in qualified demo requests within 3 months

  • 34% improvement in demo-to-purchase conversion rates

  • £8.2M in new revenue within the first year

  • Clear competitive differentiation in messaging

Without multivariate testing, they'd have launched with cost-focused messaging that converted half as many prospects.

How to run multivariate testing properly

The power of multivariate testing lies in removing human bias from important decisions. Customer behaviour reveals preferences that surveys and interviews often miss.

1. Test one variable at a time

Don't change the headline, image, and pricing simultaneously. Test one element so you know exactly what drove the performance difference.

2. Make variants meaningfully different

Testing "Sign up now" vs. "Get started today" won't teach you much. Test fundamentally different approaches: cost vs. efficiency, features vs. benefits, urgency vs. convenience.

3. Define success criteria upfront

Decide what constitutes a meaningful difference before seeing results. Is a 15% improvement worth implementing? 25%? Set the threshold in advance.

4. Wait for statistical significance

Don't stop the test early because you like the preliminary results. Run until you have enough data to be confident in the outcome (typically 95% confidence level).
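A standard way to check this for conversion rates is a two-proportion z-test. The sketch below uses hypothetical traffic figures (5,000 visitors per variant; the case study above doesn't state actual sample sizes) with the 2.3% and 4.7% rates, to show how the check works:

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical traffic: 5,000 visitors per variant. 115/5000 = 2.3%,
# 235/5000 = 4.7% (the rates from the case study; sample sizes assumed).
z = two_proportion_z(conv_a=115, n_a=5000, conv_b=235, n_b=5000)
significant = abs(z) > 1.96  # 1.96 is the two-sided 95% confidence threshold
```

With those assumed numbers the z-statistic lands well above 1.96, so the difference would clear the 95% bar. With only a few hundred visitors per variant, the same percentage gap might not.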

5. Follow up with qualitative insights

After identifying the winner, interview people who converted to understand why that version resonated. The numbers tell you what happened; conversations reveal why.

Common mistakes to avoid:

  • Testing too many variables simultaneously

  • Stopping tests early based on preferences rather than statistical significance

  • Running tests without enough traffic to reach meaningful conclusions

  • Ignoring secondary metrics (focusing only on conversion while engagement drops)

  • Not following up to understand why one version won
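On the traffic point: you can estimate before launch how many visitors each variant needs. A rough sketch using the standard two-proportion sample-size formula at 95% confidence and 80% power (the baseline rate and target lift below are illustrative):

```python
from math import ceil, sqrt

def sample_size_per_variant(base_rate, min_lift, z_alpha=1.96, z_power=0.84):
    """Approximate visitors needed per variant to detect a relative lift.

    Standard two-proportion formula: z_alpha = 1.96 for 95% confidence
    (two-sided), z_power = 0.84 for 80% power.
    """
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# e.g. a 2% baseline conversion rate, aiming to detect a 25% relative lift
n = sample_size_per_variant(base_rate=0.02, min_lift=0.25)
```

For those illustrative inputs the answer is on the order of 14,000 visitors per variant, which is why low-traffic tests so often end inconclusively.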

Try this next week

Pick one key element in your current marketing or sales process where you're unsure which approach works better. Create two versions that test fundamentally different approaches. Show them randomly to prospects and measure which generates more of the behaviour you want.

You'll likely discover that customer preferences differ significantly from internal assumptions, and the data will settle debates that could otherwise drag on for months.
