Low Fidelity
Phantom Feature

Overview
A Phantom Feature test is one of the fastest ways we measure real customer interest before anything is built. Instead of relying on surveys or self-reported feedback, this experiment captures actual user behaviour by placing a call-to-action (CTA) button on a website that appears real but, when clicked, leads to an error page. By tracking how many people attempt to access the feature or service, we gauge whether there is genuine demand. This method works particularly well when there is internal debate about whether a feature or proposition is worth pursuing. If customers actively try to engage with something that doesn’t yet exist, it’s a strong signal to move forward. If there’s little to no engagement, it tells us to rework the positioning or drop the idea entirely.
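In practice, the phantom CTA is an ordinary button wired to fire an analytics event before navigating to the dead-end page. Here is a minimal sketch in TypeScript, assuming a GA4-style gtag() is already loaded on the page; the element ID, event name, and URL are illustrative placeholders, not a prescribed setup.

```typescript
// Minimal phantom-CTA sketch. Assumes a GA4-style gtag() is loaded on the
// page; #phantom-cta, the event name, and the /coming-soon URL are
// illustrative placeholders.
declare function gtag(
  command: 'event',
  eventName: string,
  params?: Record<string, unknown>,
): void;

const cta = document.querySelector<HTMLButtonElement>('#phantom-cta');

cta?.addEventListener('click', () => {
  // Record the click first, so demand is captured even though the
  // feature does not exist yet.
  gtag('event', 'phantom_feature_click', { feature: 'early_access' });

  // Then send the user to the dead-end "coming soon" page.
  window.location.assign('/early-access/coming-soon');
});
```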
Process
We start by identifying a feature, service, or product concept that needs validation. Rather than building it outright, we create a CTA that suggests it’s already available: a button labelled “Try Now,” “Request Early Access,” or “Book a Demo.” Once the CTA is in place, we track how many users attempt to engage with it. If the test runs on an existing website, we compare its click-through rate with those of other features to understand relative demand. We sometimes place a short message on the error page itself, inviting users to leave their email address if they want to be notified when the feature becomes available. After collecting data over a set period, usually a week, we assess engagement levels. A high number of clicks signals strong demand, justifying further development. A lack of engagement suggests the idea may need refining or repositioning.
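The optional email capture on the error page can be just as lightweight. Below is a sketch of the notify-me form handler, assuming a hypothetical /api/waitlist endpoint that stores the address; any existing form-handling service would do the same job.

```typescript
// Sketch of the notify-me form on the error page. The #notify-form ID and
// the /api/waitlist endpoint are hypothetical stand-ins for whatever
// form-handling service the site already uses.
const form = document.querySelector<HTMLFormElement>('#notify-form');

form?.addEventListener('submit', async (event) => {
  event.preventDefault();
  const email = new FormData(form).get('email');

  const response = await fetch('/api/waitlist', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, feature: 'early_access' }),
  });

  if (response.ok) {
    // Swap the form for a simple confirmation message.
    form.replaceWith(document.createTextNode("Thanks! We'll let you know."));
  }
});
```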
Requirements
To run this test effectively, we need a live website with sufficient traffic, access to analytics tools to track engagement, and a controlled timeframe for gathering reliable data. Because the Phantom Feature test deliberately presents users with a dead end, it’s important to remove the test quickly to avoid creating a frustrating experience. The best results come from pairing this test with follow-up customer feedback to explore why users did or didn’t engage.
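When reading the results, raw click counts mean little without a baseline. One simple way to interpret the analytics export is to compare the phantom feature’s click-through rate against an established feature on the same page; the figures below are made-up examples, not real data.

```typescript
// Illustrative click-through-rate comparison. The counts are example
// values only; real figures would come from the analytics export.
function clickThroughRate(clicks: number, impressions: number): number {
  return impressions > 0 ? clicks / impressions : 0;
}

const phantom = clickThroughRate(140, 5200);  // phantom feature CTA
const baseline = clickThroughRate(310, 5400); // comparable existing feature

console.log(`Phantom CTR:  ${(phantom * 100).toFixed(1)}%`);
console.log(`Baseline CTR: ${(baseline * 100).toFixed(1)}%`);
```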