The “Work of Fiction” Trap: When Metrics Lie
How well-intentioned teams build illusions of progress and how to focus on reality instead.
Early in my consulting career, I observed something at a client company that stuck with me. One of the client's teams was being praised internally for dramatically improving a key customer experience metric, "After Order Contact Rate" (AOCR), which ideally falls as fewer customers have post-order problems. They had cut it in half, a result that drew significant praise within the company. Impressive on the surface, but the sheer speed of the change raised a flag for me; deep-seated user problems rarely vanish overnight. That initial skepticism lingered.
Then, a week later, came my own experience. My order was well past its delivery date, its status mocking me: "Shipped". Time to contact support. Where was the button? Order History yielded nothing. The Help section was a maze. After minutes of increasingly angry clicking, I found the escape hatch: Account → Settings → Order Support → Contact Us. Buried three layers deep.
The support agent, once reached, casually confirmed it: a flood of complaints since the "new contact process" rolled out. The client's app store reviews were brutal, dozens of fresh one-star ratings hammering the impossible search for help.
The team hadn't solved customer problems. They'd made it harder to report them. The metric looked stunning; the reality for customers was worse. They had created what I like to call a "Work of Fiction".
What is “Work of Fiction”?
A "Work of Fiction" can manifest in many different ways, but I've noticed that the majority of the time it presents itself in one of the following four common forms. It:
Appears to solve a problem based on surface-level metrics.
Actually shifts the problem elsewhere, masks it, or creates new issues.
Often trades user experience for a number on a dashboard.
Ultimately serves the creator's narrative more than the user's reality.
These solutions tell a story of progress internally, while the actual user experience tells a different, often frustrating, story. When we celebrate these fictions, we incentivize looking good over being good.
This isn’t unique to e-commerce:
The appointment availability illusion: A healthcare network claims reduced wait times. The trick? Their booking system now simply says "No appointments available" once a threshold is hit, instead of showing the real, long wait. Patients can't even get on the list.
The marketing lead mirage: A marketing team hits its qualified leads target with a broad campaign. Sales drowns in unqualified leads, wasting time and closing fewer deals. Marketing "solved" lead generation by creating a problem for sales.
The onboarding velocity trap: A subscription software company wants to increase its "Activation Rate" (users completing key initial steps). They drastically oversimplify onboarding, removing tutorials for features and skipping crucial settings just to get users through the flow faster. Activation climbs, but confused users never learn the product and churn weeks later.
Think about your own industry. Which metrics get celebrated? Could any recent "wins" have quietly made things worse for someone else?
Why do smart teams write fiction?
Very few teams truly intend to make things worse. Fictions usually arise from the system, not malice.
The Pressure Cooker: Quarterly goals demanding rapid, visible wins. Performance reviews tied to specific metrics. Internal competition for resources. Under pressure, the path of least resistance might be optimizing the metric, not the underlying reality. If hitting the AOCR target is all that matters, hiding the button becomes a grimly rational choice.
The Disconnection Dilemma: Strict team boundaries mean product teams often don't see the ripple effects of their decisions on customers or on colleagues in support or sales. They rely on metrics that tell only part of the story, looking good on the surface while missing what truly matters. Data summaries, like averages or totals, hide individual user struggles. In this environment, questioning a metric "win" can feel like rocking the boat. This disconnection breeds blindness, not bad intent. Occasionally, though, teams do know they're gaming the system, usually in high-pressure cultures where failure isn't tolerated.
Spotting the fiction: Reading between the lines
Develop a nose for narrative illusions.
What to watch for:
Metric Miracles & Qualitative Disconnects: A key metric skyrockets, but user feedback (support tickets, reviews, social media, NPS comments) screams frustration. If the numbers look great but the mood is sour, dig deeper.
Shifting the Burden: One team's win coincides perfectly with another team's pain. Support requests drop, but email volume or social media complaints surge. Marketing leads increase, but sales conversion rates tank. Problems rarely vanish; they just move. Track related metrics together.
Requirement Theater: The solution technically meets the requirement but violates its intent. Asked to "make registration easier", a team removes validation, causing downstream account errors. If explaining precisely how the solution works feels uncomfortable or requires careful wording, it's probably violating the spirit.
Ignoring the Root Cause: The fix seems too easy, avoiding the messy root cause. It addresses a symptom, letting the real disease fester.
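The first signal above, a metric miracle paired with sour qualitative feedback, lends itself to a simple automated check: flag any sharp metric improvement that coincides with a decline in a paired qualitative signal, such as average review rating. A minimal sketch in Python; the function name, threshold, and sample numbers are illustrative assumptions, not a prescription:

```python
# Hypothetical "too good to be true" detector: flags a metric win
# whose paired qualitative signal moved the wrong way.

def flag_metric_miracle(metric_before, metric_after,
                        sentiment_before, sentiment_after,
                        improvement_threshold=0.30):
    """Return True when the headline metric improved by more than
    improvement_threshold (e.g. 30%) while sentiment declined.
    Assumes a lower metric value is better (e.g. a contact rate)."""
    improvement = (metric_before - metric_after) / metric_before
    sentiment_dropped = sentiment_after < sentiment_before
    return improvement > improvement_threshold and sentiment_dropped

# AOCR halved overnight, but app store ratings fell: dig deeper.
print(flag_metric_miracle(0.12, 0.06, 4.1, 2.3))  # True
```

The point isn't the code itself but the discipline it encodes: no headline metric gets celebrated until its paired signal has been checked.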
From fiction to reality: How to build real solutions
Escaping the fiction trap requires conscious effort:
Use Balanced Metrics (Kill Isolated Metrics): Never track a key metric in isolation. Pair it with a counter-metric or a qualitative measure.
Balance Efficiency with Effectiveness: Don't just aim to lower the Support contact rate. You must also track Issue resolution satisfaction to ensure that when customers do need help, their problems are actually solved effectively.
Balance Quantity with Quality: Tracking Feature adoption rate (how many people use it) isn't enough. Pair it with Task completion success rate for that feature to understand if people are actually finding it usable and successful.
Balance Speed with Overall Experience: While improving Page load speed is often beneficial, always monitor its impact on Conversion rate or User satisfaction to confirm that speed improvements haven't broken something or inadvertently harmed the user experience.
Balance Acquisition with Realized Value: Measuring Onboarding completion rate shows users are getting in, but pairing it with 30-day active usage reveals if they're staying and actually getting ongoing value from your product.
✅ Action: In your next metrics review, identify your top three metrics. Propose a balancing metric for each. Report them together, always.
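One way to operationalize "report them together, always" is to store each key metric with its counter-metric, so a reporting tool cannot show one without the other. A minimal sketch, assuming the metric pairs from the list above; all names and values are made up for illustration:

```python
# Sketch: paired metrics so a headline number is never reported alone.
# The pairs mirror the examples above; snapshot values are invented.

METRIC_PAIRS = {
    "support_contact_rate": "issue_resolution_satisfaction",
    "feature_adoption_rate": "task_completion_success_rate",
    "page_load_speed_s": "conversion_rate",
    "onboarding_completion_rate": "thirty_day_active_usage",
}

def report(snapshot):
    """Emit each headline metric only alongside its counter-metric;
    refuse to report if either half of a pair is missing."""
    lines = []
    for metric, counter in METRIC_PAIRS.items():
        if metric not in snapshot or counter not in snapshot:
            raise ValueError(f"cannot report {metric} without {counter}")
        lines.append(f"{metric}={snapshot[metric]} | {counter}={snapshot[counter]}")
    return lines

snapshot = {
    "support_contact_rate": 0.06,
    "issue_resolution_satisfaction": 0.81,
    "feature_adoption_rate": 0.42,
    "task_completion_success_rate": 0.73,
    "page_load_speed_s": 1.8,
    "conversion_rate": 0.031,
    "onboarding_completion_rate": 0.9,
    "thirty_day_active_usage": 0.35,
}
for line in report(snapshot):
    print(line)
```

The design choice is the ValueError: making a lonely metric an error, rather than a warning, is what keeps the fiction from sneaking back in.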
Create Skin-in-the-Game & Embrace Qualitative Insights: Ensure teams feel the impact of their work.
Don't just map the customer journey; live it. Mandate regular time for using your own product to achieve real customer goals.
Have product managers spend time handling support tickets related to their features.
Present customer quotes and video clips alongside metric charts.
Don't just rely on quantitative data; use interviews, usability tests and feedback analysis to understand the why.
Distinguish Symptoms from Causes: Move beyond surface fixes.
For any metric-improving initiative, ask: "Why do customers have this problem in the first place?"
Use the "Five Whys" relentlessly to trace issues back to their roots.
✅ Action: Take your current top priority metric. Ask "Why does this metric matter?" five times. Is your solution aimed at the first answer or the fifth?
Redefine Success (For Leaders): Reward reality, not just appearances.
Celebrate teams that identify and solve root causes, even if it takes longer.
Praise honest look-backs that reveal hidden problems.
Focus rewards on the actual impact delivered (the outcome), not just on shipping features (the output). Ask 'did it solve the problem?' not just 'did we launch it?'
Review how metrics were improved, not just that they improved.
Foster Cross-Functional Accountability & Truth-Telling: Break down barriers between teams and make it safe to be honest.
Align teams, like engineering and customer support, around shared goals focused on improving the overall customer experience or resolving user issues, not just team-specific metrics.
Create psychological safety. Can someone flag a metric as "too good to be true" without fear? Celebrate teams that identify unintended negative consequences of their own work.
Not all metric focus is fiction
Let's be clear: optimizing metrics isn't inherently bad. It works well when:
Metrics directly reflect genuine user benefit (e.g. faster load times leading to less waiting).
Solutions achieve the same outcome with less effort (true automation).
The easiest way to improve the metric is the best thing for the user.
The difference is alignment: In genuine solutions, the metric's story matches the user's reality. In fiction, they diverge.
Conclusion: Build reality, not fiction
A Work of Fiction is dangerous because it corrupts our understanding of progress. We mistake movement for advancement, narrative for reality.
Product development isn't about telling compelling stories in slide decks; it's about building better realities for the people who use our products. This requires the courage to look beyond convenient numbers, to question our own successes and to prioritize lasting improvement over fleeting applause.
Your Next Steps:
Audit: Which of your key metrics could incentivize fiction?
Observe: When did you last silently watch a real customer use your product?
Question: For your recent "wins," how exactly did the metric improve? What might be the unseen cost?
Balance: What counter-metrics can prevent optimizing one thing at the expense of everything else?
Let's commit to building products grounded in truth.
Like what you’ve read here? Make sure to share RoadToPM with others ✨