Low Fidelity
Mental Mapping

Overview
At Future Foundry, we use Mental Mapping to uncover how customers think about and categorise different elements of a venture, product, service, or experience. Whether we’re designing a new proposition, refining messaging, or structuring a digital interface, this experiment gives us insight into how customers naturally group and prioritise information. By working directly with real users, we see not only what they expect but also where misalignment might occur. If a product feature or concept doesn’t fit where they naturally place it, it’s a sign that the messaging or structure needs adjusting. This is a low-fidelity experiment, meaning it’s best used at the start of an engagement to shape later, higher-fidelity validation tests.
Process
We start by defining the scope of what we’re testing. If we’re validating a new venture’s messaging, we’ll focus on key value propositions, pain points, and customer benefits. If it’s a digital experience, we’ll test how users expect to navigate and categorise information. We recruit a group of target customers—typically 15 to 20 participants—who closely match the ideal audience. Each participant is given a set of predefined category cards along with blank cards so they can create their own labels. During the session, we guide them through a structured mapping exercise, asking them to explain their thought process as they arrange the cards. This not only helps us see where things fit naturally but also exposes any confusion or inconsistencies. After running multiple sessions, we analyse the results, looking for recurring patterns in how customers group information (a simple way to surface these patterns is sketched below). These insights help us refine how a product, service, or experience is presented—whether that means adjusting navigation in a digital product, repositioning a value proposition, or rethinking feature prioritisation.
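To illustrate what "looking for recurring patterns" can mean in practice, here is a minimal sketch of how grouped card-sort results from several sessions might be aggregated into pairwise agreement scores. The card labels, the sample data, and the 60% agreement threshold are illustrative assumptions for this example, not part of a prescribed Future Foundry toolkit.

```python
# Illustrative sketch: aggregate Mental Mapping (card-sort) results into
# pairwise co-occurrence scores. All card names and data are hypothetical.
from itertools import combinations
from collections import defaultdict

# Each participant's session: a list of groups, each group a set of card labels.
sessions = [
    [{"pricing", "plans"}, {"support", "onboarding"}],
    [{"pricing", "plans", "onboarding"}, {"support"}],
    [{"pricing", "plans"}, {"onboarding", "support"}],
]

def cooccurrence(sessions):
    """Count how often each pair of cards was placed in the same group."""
    pairs = defaultdict(int)
    for groups in sessions:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

pairs = cooccurrence(sessions)
n = len(sessions)
# Pairs grouped together by most participants suggest a shared mental model;
# low-agreement pairs flag where categorisation or messaging may need work.
for (a, b), count in sorted(pairs.items(), key=lambda kv: -kv[1]):
    agreement = count / n
    label = "strong" if agreement >= 0.6 else "weak"  # assumed threshold
    print(f"{a} + {b}: {agreement:.0%} of participants ({label})")
```

Running this on the sample data would show, for instance, that "pricing" and "plans" are grouped together by every participant, while other pairings split the group, which is exactly the kind of signal we use to decide where structure or messaging needs refinement.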
Requirements
Running a successful Mental Mapping experiment requires access to engaged participants from the target audience, a clear focus on what’s being tested, and either a physical setup (printed cards and a workspace) or a digital environment using a tool like Miro or OptimalSort. The findings from this test don’t provide hard validation but serve as an essential first step in structuring an idea in a way that makes sense to customers.