Why you should test manually before you automate
Jacob Dutton
14 Aug 2025

The Service Simulation test is one of the most underused validation methods in corporate innovation, and the one that gives you the strongest evidence about whether your service proposition actually works.
Most teams rely on surveys, focus groups, and customer interviews to validate ideas. They ask "Would you buy this?" and "How much would you pay?" Then they build based on what people say they want.
But there's a massive gap between what customers say in a survey or interview and what they actually do in the real world.
What's a Service Simulation test?
A Service Simulation test involves creating a customer experience and delivering the value manually. But unlike other experiments (like the Behind the Scenes test) where you hide the manual work, customers know they're working with humans.
So, it's not about tricking anyone; you just want to understand what it really takes to solve customer problems before you automate anything.
How a hotel group saved £5.5M by testing manually first
A major hotel chain we work with wanted to build an AI-powered concierge service. They wanted to incorporate intelligent guest recommendations, automated booking assistance, 24/7 problem resolution, and personalised experience management on-property.
The tech team estimated a £6M development cost and an 18-month engineering timeline. Leadership was ready to approve the full build based on guest survey responses about wanting "more personalised service."
Instead of building the AI-first experience, we encouraged them to build a simple chat interface first where real team members handled guest requests manually. Every interaction was tracked (response times, resolution rates, guest satisfaction) and they documented exactly what steps were needed for each type of request.
The data showed something unexpected: 70% of guest requests were for basic information like WiFi passwords and directions. Only 15% actually needed "intelligent" recommendations. Guests also valued speed over sophistication, and many supposedly "AI-worthy" requests still required human judgment anyway.
After 6 weeks, they got a much better understanding of what actually needed automation versus what worked better with humans in the loop. They ended up building a simple chatbot for basic info and invested in additional training for more complex requests. Total cost: £200K and a much better guest experience than the original £6M AI vision.
The Service Simulation test gives you three types of evidence that are impossible to get any other way:
Real customer behaviour at the point of value delivery
A firsthand understanding of what it takes to deliver that value profitably
Concrete data on operational requirements before you automate anything
Unlike surveys or interviews, customers aren't telling you what they might do; they're showing you what they actually do when it matters most.
How to run a Service Simulation test
1. Start with one specific customer problem
Not "guests need better service" but "guests wait 15 minutes for basic information like WiFi passwords." The more specific and painful the problem, the clearer your success criteria.
2. Design the manual delivery process
Map out exactly how you'd solve that problem manually, step by step. What information do you need? How long does each step take? What could go wrong?
3. Set clear success metrics
Define what good looks like: customer satisfaction scores, completion times, willingness to pay. Without clear metrics, you can't prove you've solved anything.
4. Start small and transparent
Test with 5-10 customers who know they're getting manual service. Focus on learning, not impressing. You're proving the value exists before you automate it.
5. Document everything ruthlessly
Track every step, every delay, every customer reaction. This operational reality is what tells you what to build, what to automate, and what to keep manual.
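The tracking in steps 3 and 5 doesn't need special tooling; a spreadsheet or a few lines of code is enough. Here's a minimal sketch of an interaction log that aggregates the kinds of metrics the hotel team captured (response times, resolution rates, satisfaction). All field names, request types, and values are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    request_type: str      # e.g. "wifi_password", "recommendation" (hypothetical categories)
    response_secs: float   # time from request to first human response
    resolved: bool         # was the request fully resolved?
    satisfaction: int      # 1-5 rating collected after the interaction

def summarise(log):
    """Aggregate per-request-type metrics from a list of Interactions."""
    by_type = {}
    for i in log:
        by_type.setdefault(i.request_type, []).append(i)
    return {
        t: {
            "count": len(items),
            "avg_response_secs": sum(i.response_secs for i in items) / len(items),
            "resolution_rate": sum(i.resolved for i in items) / len(items),
            "avg_satisfaction": sum(i.satisfaction for i in items) / len(items),
        }
        for t, items in by_type.items()
    }

# Illustrative data: three manually handled requests
log = [
    Interaction("wifi_password", 30, True, 5),
    Interaction("wifi_password", 45, True, 4),
    Interaction("recommendation", 300, False, 3),
]
print(summarise(log))
```

A log like this is what lets you see, after a few weeks, which request types dominate and which ones actually justify automation.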
Try this next week
Take a current service project you're working on. Instead of building the automated version, design how you'd deliver the same value manually for 5-10 customers. Focus on one specific use case, document every step, and charge real money if possible.
You'll find out whether customers actually value what you're planning to build and what it really takes to deliver that value profitably.