[[TITLE]]
[[SUBTITLE]]
You can feel it when it happens. The room is supposed to be about patients, or students, or customers, or the product. Then the dashboard goes up, and suddenly the conversation narrows to one thing: the number. “We need the churn line under 5%.” “We’ve got to hit 10,000 steps.” “Our exam pass rate must be 95%.” The metric swallows the mission. People start doing smart-looking things that somehow feel small and wrong. That’s surrogation.
Surrogation is when we let a proxy—a score, KPI, or target—stand in for the real goal, and then we start managing the proxy as if it were the goal.
We’re the MetalHatsCats team. We’re building a Cognitive Biases app because we keep seeing teams like yours get tripped by stuff like this. Surrogation is sneaky, common, and fixable. Let’s make it visible and manageable.
What Is Surrogation (When the Metric Becomes More Important Than the Goal) and Why It Matters
Metrics are not evil. They’re tools. We make them because real goals are fuzzy. “Take care of patients” is noble but vague. “90% patient satisfaction” is crisp and tractable. So we adopt the metric and move on. That’s fine—until we forget the metric is not the goal.
Surrogation happens when the shorthand becomes the story. We stop asking, “Does this help the real goal?” and start asking, “Does this move the number?” The difference sounds small. In practice, it turns people and systems toward gaming, tunnel vision, and hollow wins.
It matters for a few concrete reasons:
- It degrades outcomes. Teams optimize what’s easy to measure, not what’s valuable.
- It punishes integrity. People who refuse to game the system look worse than those who do.
- It misallocates effort. We burn time polishing artifacts of the metric instead of serving the mission.
- It breaks trust. Stakeholders feel the gap between the story and their lived experience.
This isn’t just opinion. Organizational research has warned us for decades: reward A while hoping for B, and you’ll get A at the expense of B (Kerr, 1975). When a measure becomes a target, it ceases to be a good measure (Goodhart, 1975; Campbell, 1976). Managers can literally “forget” the richer strategy when a salient metric is present (Choi, Hecht, & Tayler, 2012). None of that means we should abandon metrics. It means we should design and use them like power tools: clearly, cautiously, and with guards on.
Examples
Stories are better than definitions because they catch the feeling of it. Here are a handful across domains.
Healthcare: Door-to-Doctor Time
A hospital sets a target: every patient should see a doctor within 30 minutes. Good goal. Patient experience and outcomes often improve when delays shrink.
At first, it works. Triage tightens up. Hallway consults fade.
Then the number slips. New policies emerge. Nurses are told to have the doctor “eyeball” patients in the waiting room and click the EHR field, even if the actual consult happens later. The metric stays green. Families still wait.
Did the system fail? No. It did exactly what it was told: maximize the recorded time-to-doctor. The measure became the mission, and patient care quietly moved to second place.
Education: Teaching to the Test
A district ties funding and teacher evaluations to standardized test scores. Scores go up. Celebration.
Inside classrooms, time shifts from debate, projects, and reading for joy to practice packets, keyword tricks, and narrow test formats. Kids learn less breadth. The weakest students get the most drill. They pass more often—but many graduate unready for anything beyond tests.
The teachers didn’t suddenly forget how to teach. They focused on the incentive and ignored the mission because that’s how the system told them to survive (Campbell, 1976). The metric got what it asked for and lost what it couldn’t see.
Software: Daily Active Users (DAU) Everywhere
A consumer app chases DAU. Product managers push streaks, notifications, and tiny rewards. DAU climbs. Investors smile.
Meanwhile, users grow numb. Support tickets about spammy pings rise. A rival launches a calmer, more respectful experience. The company’s churn creeps up. The team scrambles with new tricks to prop up DAU.
They optimized a proxy for product-market love. DAU wasn’t bad—it was partial. When it took over, it pulled the team away from the actual job: help a specific person solve a specific problem and feel good doing it.
Sales: Quarterly Quota That Shrinks the Future
A B2B sales team must hit quarterly revenue targets. Reps discount heavily at quarter-end and push customers to sign prematurely. Revenue looks great in Q2.
In Q3, upsells falter. Customers regret being rushed and churn early. The pipeline thins because long-term relationship work was paused at the end of Q2 to “close what’s close.”
Did the team underperform? Not against the metric. Against the mission—building healthy, lasting customer relationships—they starved themselves.
Manufacturing: Defects Down, Recalls Up
A factory ties bonuses to defect rate per line item. Defect reports drop. Leadership cheers.
A quality engineer notices the “rework room” is bursting. Operators are quietly rerouting flawed units to a rework path that doesn’t count as a defect. Scrappage costs rise. Later, a recall: a reworked part fails in the field.
The metric narrowed the definition of “quality” to “what the dashboard sees.” The mission—safe, reliable product—was a different thing.
Academia: Publication Counts and Perverse Incentives
You need 10 publications for tenure. Your lab slices the work into salami-thin papers and aims for count over contribution. The university looks productive. You look prolific.
Five years later, no one can reproduce your flashy results. Your students learned questionable research practices. You trained them to chase metrics instead of truth.
No villain here. Just a system rewarding publication count “A” while hoping for scientific progress “B” (Kerr, 1975).
Personal Productivity: The Step Counter That Stole Your Walk
You buy a fitness tracker. It’s fun. Ten thousand steps is your daily lighthouse.
You jog in place at 11:45 pm because you’re 400 steps short. You take phone calls pacing your kitchen. You skip strength and flexibility work, even though your back hurts.
The metric got you moving—great! Then it took over. Your body needed a mix. The goal was health. The metric was steps.
Startups: North Star Myopia
The company picks a North Star metric: weekly active teams. It’s tidy and motivational.
The growth team invests in “team activation hacks”: aggressive invites, pre-filled teams, trial teams seeded with bots that look like teammates. On paper, weekly active teams soar.
Then you notice your “active teams” are ghost towns. Users don’t return. Enterprise buyers don’t convert. You tell yourself it’s “a lag.”
It’s surrogation. The North Star stopped being a star and turned into a laser pointer. It showed you where to stare, not what matters.
Public Policy: Crime Statistics and the Streetlight
A city measures “reported incidents.” Police are encouraged to keep them down. Certain crimes get under-classified. Reports are discouraged. On paper, crime falls.
Residents feel less safe. Trust erodes. Real harm increases, unmeasured.
We keep walking under the bright circle where the data is clean. The dark street might be where the truth lives.
How to Recognize and Avoid It
Surrogation isn’t a moral failing. It’s a human move: we like clear edges. But we can design for it. Here’s a practical approach we’ve used with teams.
Recognize the Early Feel
Before a checklist, trust the itch:
- People say “We can’t do that; it’ll hurt the metric” without explaining how it hurts the mission.
- Meetings pivot to red/green dashboard debates, not customer, patient, or user stories.
- Folks who raise “but is this better for the user?” sound naive or difficult.
- Wins feel hollow. You moved the number, but something smells off.
That suspicion is worth naming out loud. “I think we’re surrogating—has the metric replaced the mission here?” Normalize that sentence.
Design Metrics with Escape Hatches
Metrics are necessary. Make them less brittle. A minimal sketch of what this can look like in practice follows the list below.
- Use a portfolio of measures, not a shrine to one. Blend outcome (did the world improve?), behavior (what we did), and quality (what trade-offs we accepted).
- Include measures of mission health that are hard to game. In healthcare, match time-based targets with readmission rates or safety markers. In product, pair DAU with retention and satisfaction for new users and core users separately.
- Write down what you will not trade. “We will not sacrifice clinician judgment to hit throughput.” “We will not notify users more than once a day.”
- Make your metric review cadenced and reversible. Quarterly skin-shedding prevents calcification.
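To make the portfolio idea concrete, here is a minimal sketch of writing a metric portfolio down as data rather than a slide. Everything specific in it (the Metric and MetricPortfolio classes, the metric names, the banned trade-off, the review date) is a hypothetical illustration for a consumer product team, not a recommended set.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Metric:
    name: str
    kind: str          # "outcome", "behavior", or "quality"
    definition: str    # exact definition: what counts, what doesn't
    gameable_by: list  # known ways to hit the number while hurting the mission


@dataclass
class MetricPortfolio:
    mission: str
    metrics: list
    banned_tradeoffs: list  # things we will not trade for a green KPI
    review_date: date       # when this portfolio gets re-examined or retired


# Hypothetical portfolio for a consumer product team.
portfolio = MetricPortfolio(
    mission="Help a specific person solve a specific problem and feel good doing it",
    metrics=[
        Metric("dau", "behavior",
               "Unique users who open the app on a given day",
               gameable_by=["spammy notifications", "streak pressure"]),
        Metric("d30_retention", "outcome",
               "Share of new users still active 30 days after signup",
               gameable_by=["counting seeded or bot accounts as users"]),
        Metric("notification_complaints", "quality",
               "Support tickets about unwanted pings per 1,000 users",
               gameable_by=["making it harder to file a ticket"]),
    ],
    banned_tradeoffs=["We will not notify users more than once a day"],
    review_date=date(2026, 1, 1),
)

print(f"{len(portfolio.metrics)} metrics in service of: {portfolio.mission}")
```

The point of the data format is not the code itself; it is that the definition, the known ways to game each number, the banned trade-offs, and the review date all live in one reviewable place.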
Keep the Mission Visible in the Room
Surrogation thrives when the mission is vague or out of sight.
- Start meetings with a human story. One email, one patient, one student outcome. It anchors the metric conversation to reality.
- Put the text of the mission where you can point to it. “Today we serve X by doing Y.”
- Ask a ritual question before deciding: “If we had no dashboard, would we still choose this?”
Build Guardrails Against Gaming
People don’t game because they’re bad. They game because we force them into corners. Reduce the corners.
- Tie incentives to families of metrics, not single numbers. Balanced pressure makes gaming less rewarding.
- Reward integrity explicitly. Celebrate when someone shows the metric is wrong or incomplete.
- Separate learning spaces from ranking spaces. Pilot metrics in “no consequence” sandboxes before attaching pay or status.
Audit Your Metrics Like Code
- Version your metrics. When did “active user” change? What’s v3 compared to v2? Write it.
- Test for adversarial behavior. If an enemy tried to maximize this number while hurting the mission, what would they do? Did you just describe your plan?
- Sample and shadow. Pick random cases and trace the end-to-end reality alongside the numbers. Does the story match the chart? (The sketch after this list shows one way to script it.)
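A shadow audit can be as small as a script. The sketch below assumes you can export a list of cases where the dashboard recorded a verdict and a human has traced what actually happened; the field names and the sample size of five are placeholders.

```python
import random


def shadow_audit(cases, sample_size=5, seed=None):
    """Pick random cases and compare what the metric said to what actually happened.

    Each case is a dict with hypothetical fields:
      'id'             - case identifier
      'metric_says_ok' - True if the dashboard counted this case as a success
      'really_ok'      - True if a human trace of the case agrees
    """
    rng = random.Random(seed)
    sample = rng.sample(cases, min(sample_size, len(cases)))
    mismatches = [c for c in sample if c["metric_says_ok"] != c["really_ok"]]
    return sample, mismatches


# Toy data: several cases look green on the dashboard but failed in reality.
cases = [
    {"id": 1, "metric_says_ok": True,  "really_ok": True},
    {"id": 2, "metric_says_ok": True,  "really_ok": False},
    {"id": 3, "metric_says_ok": True,  "really_ok": False},
    {"id": 4, "metric_says_ok": False, "really_ok": False},
    {"id": 5, "metric_says_ok": True,  "really_ok": False},
]
sample, mismatches = shadow_audit(cases, seed=7)
print(f"Audited {len(sample)} cases; {len(mismatches)} where the story didn't match the chart.")
```

The expensive part is the human trace of each sampled case, not the sampling; the script just keeps the selection honest and repeatable.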
Coach “Metric Fluency” in Teams
Make metric literacy routine, not precious.
- Teach new hires how metrics are made, where they break, and how to push back.
- Build a “red team” rotation: every quarter, a few people argue how the metric could mislead us.
- Invite front-line folks to critique metrics. They see the workarounds first.
Know When to Kill a Metric
Some metrics do their job and then become wrong. Treat them like tools with a life cycle.
- When a metric has been the top priority for more than two quarters, schedule a retirement/rotation check.
- Kill metrics that have stopped correlating with your mission outcomes (a sketch of one such check follows this list).
- Declare “mission days” where the dashboard is off. Call customers, visit classrooms, walk the floor.
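One way to run the “stopped correlating” check is a small recurring script like the sketch below. The 0.3 threshold and the toy numbers are arbitrary illustrations, and it assumes Python 3.10+ for statistics.correlation.

```python
from statistics import correlation  # assumes Python 3.10+


def metric_still_earns_its_keep(metric_history, outcome_history, threshold=0.3):
    """Return False if the metric no longer tracks the mission outcome.

    Both arguments are equal-length lists of periodic values, e.g. one
    number per week over the last quarter. The 0.3 threshold is arbitrary.
    """
    if len(metric_history) < 3:
        return True  # not enough data to judge; keep the metric for now
    return correlation(metric_history, outcome_history) >= threshold


# Toy example: DAU keeps climbing while 30-day retention falls.
dau = [100, 120, 150, 180, 210, 260]
retention = [0.42, 0.40, 0.37, 0.33, 0.30, 0.26]

if not metric_still_earns_its_keep(dau, retention):
    print("Schedule the retirement/rotation check: this metric may be lying to us.")
```

A failing check is not a verdict; it is the trigger for the retirement/rotation conversation you scheduled above.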
The Checklist
Use this when you’re about to commit to a target or realize one is running your life.
- Write the mission in one sentence. Does the metric directly serve it?
- List three obvious ways to game the metric. Do any look like your current plan?
- Identify one stakeholder who could be hurt if you over-optimized this metric. How would you notice?
- Add a balancing measure that would catch collateral damage.
- Define a minimum standard for quality you will not trade away.
- Decide who can veto metric-driven decisions on mission grounds.
- Set a review date and a kill-switch condition.
- Run a shadow audit: pick five cases and check if the metric tells the truth about them.
- Put one real story on the agenda of every metrics meeting.
- Commit to a “metric sabbath” once a quarter—operate for a day without looking, then notice what changed.
Related or Confusable Ideas
Surrogation sits in a family of traps and laws. It’s useful to keep the borders crisp.
- Goodhart’s Law: “When a measure becomes a target, it ceases to be a good measure.” This captures what happens when adversaries—or just clever humans—optimize the measure. Surrogation is the psychological version: we stop thinking about the goal altogether (Goodhart, 1975).
- Campbell’s Law: The more a quantitative measure is used for decisions, the more it becomes subject to corruption and distorts the process. This highlights unintended consequences in social settings like education (Campbell, 1976).
- The McNamara Fallacy: Overreliance on quantification leads people to ignore what can’t be measured. Surrogation often rides this fallacy: “If it’s not on the dashboard, it’s not real.”
- Proxy Myopia: Focusing narrowly on a proxy variable while ignoring the system around it. Surrogation is a form of proxy myopia; the proxy becomes the point.
- Goal Displacement: A broader term where the means become the ends. Surrogation is goal displacement via metrics specifically.
- Hitting the Target, Missing the Point: The practical signature of surrogation. You’re green on KPIs and red in life.
- Vanity Metrics vs. Actionable Metrics: Vanity metrics look impressive but don’t guide decisions. Surrogation can elevate vanity metrics—page views, downloads, “users”—over the hard ones like retention or NPS from paying customers.
- Moral Hazard: When people take risks because someone else bears the cost. Surrogation can create moral hazard if teams boost numbers while externalizing harm.
- Streetlight Effect: Searching where the light is. Metrics put light in convenient spots. Surrogation keeps you under the lamp.
If you want a compact, sharp critique of metric misuse in general, read The Tyranny of Metrics (Muller, 2018). If you want the lab version showing how managers forget strategy when metrics are salient, look up “Lost in Translation” (Choi, Hecht, & Tayler, 2012). Both are short and bracing.
FAQ
Q: How do I tell my boss the target is wrong without sounding like a whiner? A: Tie your critique to the mission and bring data. “Our mission is X. This metric pushes us to do Y, which helps the number but hurts X. Here are three examples and an alternative metric/balancing measure.” Invite a trial: “Let’s run a two-week A/B where we track both and review outcomes.”
Q: Is surrogation always bad? Aren’t metrics necessary? A: Metrics are essential. Surrogation is bad when the metric erases judgment. The fix isn’t fewer numbers; it’s better-designed metrics, regular reviews, and a team culture that keeps the mission in the conversation.
Q: What’s a quick first step if I suspect we’re surrogating? A: Run a shadow case audit. Take five recent decisions driven by the metric, and study the real-world outcomes. If the stories don’t match the chart, you’ve got a live issue. Then add a balancing measure and schedule a metric review.
Q: How do I build incentives that don’t create gaming? A: Bundle incentives across a small set of complementary metrics. For example, pay customer support bonuses on resolved issues, customer satisfaction, and recontact rate together. Add a quality floor: if any metric drops below a threshold, no bonus unlocks.
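As a sketch of that answer, here is one hypothetical way to compute a bundled support bonus with a quality floor; the weights, floors, and base amount are placeholders that show the shape, not a recommended scheme.

```python
def support_bonus(resolved_rate, csat, recontact_rate, base_bonus=1000.0):
    """Pay on a bundle of complementary metrics, with a quality floor.

    Hypothetical floors: if any metric misses its floor (or recontact
    exceeds its ceiling), no bonus unlocks at all.
    """
    floors_ok = resolved_rate >= 0.80 and csat >= 4.0 and recontact_rate <= 0.15
    if not floors_ok:
        return 0.0
    # Blend the bundle so no single number dominates the payout.
    score = (0.4 * resolved_rate              # share of issues resolved (0 to 1)
             + 0.4 * (csat / 5.0)             # satisfaction on a 1-to-5 scale, normalized
             + 0.2 * (1.0 - recontact_rate))  # fewer recontacts is better
    return round(base_bonus * score, 2)


print(support_bonus(resolved_rate=0.92, csat=4.5, recontact_rate=0.08))  # 912.0, bonus unlocks
print(support_bonus(resolved_rate=0.95, csat=3.2, recontact_rate=0.05))  # 0.0, CSAT below the floor
```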
Q: Our North Star metric is beloved. How do we avoid myopia? A: Keep the North Star, but add constellations. Pair it with two to three guard metrics, declare “banned trade-offs,” and revisit definitions quarterly. Also, rotate a team to challenge the North Star—what would we miss if we worship it?
Q: What about personal goals like steps or word counts? A: Use them as prompts, not judges. Rotate the metric weekly, e.g., steps this week, sleep next week, strength the next. Add a weekly reflection: “Did this metric serve my health or hijack it?” If it hijacked, change it.
Q: How can I teach my team to spot surrogation early? A: Run a mini-workshop with three real cases from your org. Ask: what was the goal, what was the metric, how did behavior shift, and what harm or value showed up? Then draft your “surrogation checklist” together and bake it into planning.
Q: Are OKRs immune to surrogation? A: No. OKRs can even concentrate it. Keep Objectives human and qualitative; make Key Results plural and mixed (leading and lagging). At check-ins, ask explicitly: “Did chasing this KR make anything worse that the Objective forbids?”
Q: How many metrics are too many? A: Enough to cover the mission, not so many that no one can remember them. A healthy suite has one to two core outcome metrics, two to three behavior/process metrics, and one to two quality/guardrail metrics. If your team can’t recite them, you have too many.
Q: What do I do when the board or regulators force a bad metric? A: Comply and buffer. Add internal guard metrics, document unintended effects, and report them. Where possible, advocate for revisions with evidence. Meanwhile, protect your mission locally: build policies that prevent harmful trade-offs, even if they cost you on the forced metric.
The Surrogation Checklist (Stick-on-the-wall Edition)
- State the mission in one sentence we all agree on.
- Write the metric and its exact definition (what counts, what doesn’t).
- List three ways someone could hit the metric while hurting the mission.
- Add at least one balancing measure to catch those harms.
- Set a quality floor we will not cross for any green KPI.
- Define who can veto metric-chasing on mission grounds.
- Schedule a review date and a kill condition in advance.
- Pilot in a sandbox before attaching pay or status.
- Do a five-case story audit monthly; share one case in the team meeting.
- Take a quarterly metric sabbath; operate a day without the dashboard and debrief.
Wrap-up
Every metric starts as a promise. “We’ll use this as a flashlight.” If you’re not careful, the flashlight becomes your sun. You end up building a world that looks beautiful in the beam and dead in the corners.
The fix is not heroic. It’s a habit: remind yourself of the mission, diversify your measures, design guardrails, and keep real stories in the room. It’s a willingness to say, “This number is lying to us,” and the humility to change it. It’s leaders who reward candor over cosmetics and teams that practice metric literacy like they practice code review.
Surrogation isn’t rare. It’s everywhere we’ve ever worked. It’s in schools trying to help kids and in startups trying to help users. It’s in your step counter and your sprint board. But once you see it, you can name it, and once you can name it, you can manage it.
We’re building a Cognitive Biases app because catching patterns like this early saves teams months of drift and people years of frustration. If your metric has started eating your mission, you’re not stuck. You’re one honest conversation and a fresh checklist away from better work.
References:
- Campbell, D. T. (1976). Assessing the Impact of Planned Social Change.
- Choi, J., Hecht, G. W., & Tayler, W. B. (2012). Lost in Translation: The Effects of Incentive Compensation on Strategy Surrogation. The Accounting Review.
- Goodhart, C. A. E. (1975). Problems of Monetary Management: The U.K. Experience.
- Kerr, S. (1975). On the Folly of Rewarding A, While Hoping for B. Academy of Management Journal.
- Muller, J. Z. (2018). The Tyranny of Metrics. Princeton University Press.
