The Sum That Lies: Why Parts Often Look More Likely Than the Whole

By MetalHatsCats Team

You’re at a planning meeting. The team lays out the quarter: “We’ll launch feature A (70% likely), refactor the API (60%), run the reactivation campaign (65%), and ship a new onboarding flow (75%).” Heads nod. It all sounds feasible. But then you glance at the quarter and think… wait. If each is “likely,” why does the quarter feel impossible? Because your brain is smuggling in a quiet arithmetic mistake: you’re treating the parts as more probable than the whole.

That’s the Subadditivity Effect: when people judge components to be more probable in total than the broader, combined event they belong to. The “unpacked” parts win our attention, and the “packed” whole fades, even though they describe the same reality.

We’re the MetalHatsCats team. We build tools that help people see their own thinking more clearly—including our upcoming Cognitive Biases app—because we’ve felt this bias wreck roadmaps, budgets, and bets. Let’s unpack subadditivity, show where it ambushes your work, and give you ways to catch it in the act—without turning your brain into a spreadsheet.

What is the Subadditivity Effect – when parts seem more probable than the whole – and why it matters

Subadditivity lives in a simple trap: if you “unpack” an event into detailed, vivid parts, people assign a higher combined probability to those parts than to the same event phrased broadly. If you ask, “What’s the chance I’ll be late tomorrow?” you might hear 20%. But if you unpack: “traffic,” “kid meltdown,” “urgent Slack DM,” “subway delay,” people will assign probabilities to each; add them and you might get 40% or 50%. It feels like better thinking because it’s more detailed. It’s actually biased thinking because probability should add up the same regardless of phrasing (Tversky & Koehler, 1994).
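
To see the arithmetic, here's a minimal Python sketch. The cause probabilities are invented for illustration, and independence is assumed; any overlap between causes would pull the true number even lower:

```python
# Minimal sketch: why summing unpacked parts overshoots the whole.
# The cause probabilities below are illustrative, not measured.

causes = {
    "traffic": 0.10,
    "kid meltdown": 0.05,
    "urgent Slack DM": 0.05,
    "subway delay": 0.08,
}

naive_sum = sum(causes.values())  # what the gut does: 0.28

# If the causes were independent, the chance that at least one fires is
# 1 minus the chance that none of them do -- always <= the naive sum.
p_none = 1.0
for p in causes.values():
    p_none *= 1.0 - p
p_at_least_one = 1.0 - p_none

print(f"naive sum of parts:     {naive_sum:.2f}")       # 0.28
print(f"P(late) if independent: {p_at_least_one:.2f}")  # ~0.25
```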

Why it matters:

  • It bloats risk estimates when you enumerate failure modes and then “add them up” in your gut.
  • It inflates opportunity estimates when you list growth channels and assume they’re all “likely-ish.”
  • It warps budgets and timelines because the same set of outcomes looks more feasible or more dangerous depending on whether you pack or unpack it.
  • It makes forecasts sound “confident” when they’re just over-explained.

In short: subadditivity turns detail into a mirage. The details feel like signal. Sometimes they are. But they also push your mind to over-allocate probability to the “parts,” starving the “whole.”

Examples (stories or cases)

1) Product roadmaps: the quarter that always slips

At a startup we worked with, the PMs did weekly “confidence ratings.” Each initiative got a number. Individually, they seemed reasonable: 0.6 here, 0.7 there. But if the quarter’s goal depended on three of those initiatives landing, the actual chance of the goal was far lower than the “average” confidence implied.

Subadditivity appeared when the team unpacked obstacles by feature: “Integration might slip because of vendor approval (30%). The migration could slip due to test flakiness (25%). The launch might slip due to legal review (20%).” Add those gut numbers and your mind says, “We’re at 75% risk of slip.” Meanwhile, the “whole” question—“What’s the chance we hit the quarter?”—gets a number like 50%, which is inconsistent with their own parts. The parts have eaten the whole.

The fix that worked: a single additivity pass. The PM would list all major initiatives, ask for the probability of finishing each by the deadline, and then compute the probability that the quarter’s composite OKR would be met, assuming reasonable dependency structure. That number was always lower than the meeting’s vibe. It prevented a “surprise” miss three quarters in a row.
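
Here's a sketch of that additivity pass, reusing the confidence numbers from the opening scenario. It assumes the quarter needs every initiative to land and treats them as independent; real dependencies usually drag the number lower still:

```python
# A minimal additivity pass: the quarter only succeeds if everything lands.
# Independence is assumed here for simplicity.

initiatives = {
    "feature A": 0.70,
    "API refactor": 0.60,
    "reactivation campaign": 0.65,
    "new onboarding flow": 0.75,
}

p_quarter = 1.0
for name, p in initiatives.items():
    p_quarter *= p

# Four "likely" initiatives compound into an unlikely quarter.
print(f"P(quarter's composite goal): {p_quarter:.2f}")  # ~0.20
```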

2) Marketing funnels: channel optimism

A marketing lead split growth into buckets: SEO, paid social, email, referral, and PR. For each, the team built a plan. Projections, timelines, owners. The PMM said, “We have five chances to hit our signup target.” Everyone felt safer. But those “five chances” weren’t independent and they were already included in the overall growth number. Unpacking copies the hope five times; your brain adds it five times.

The whole question was “What’s the probability of hitting 10k signups this month?” Unpacking into five channels inflated the perceived total probability because each channel felt like an additional shot, not just a route to the same outcome. Unsurprisingly, they missed and blamed “channel variance.” The truth: they overweighted the parts and underweighted shared constraints like brand awareness and product-market fit.

A better pattern: forecast top-line signups first, then apportion to channels. Don’t forecast channels independently and sum. Treat the total as a pie. The slices must fit inside.
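
A pie-first sketch of that pattern; the forecast and channel shares are hypothetical placeholders:

```python
# Pie-first channel plan: forecast the total, then split it into slices.

total_signup_forecast = 10_000  # whole-first: the top-line estimate

channel_shares = {  # must sum to 1.0 -- slices live inside the pie
    "SEO": 0.30,
    "paid social": 0.25,
    "email": 0.20,
    "referral": 0.15,
    "PR": 0.10,
}

assert abs(sum(channel_shares.values()) - 1.0) < 1e-9, "slices exceed the pie"

for channel, share in channel_shares.items():
    print(f"{channel:12s} -> {int(total_signup_forecast * share):>5d} signups")
```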

3) Security risk assessments: the scary list

Security reviews often enumerate threats: phishing, credential stuffing, insider misuse, supply-chain compromise, vulnerability exploitation. Each comes with a risk score. Executives read the list and feel doomed. The combined numbers exceed 100% in their heads. They buy three tools. Risk shrinks on paper.

Then a breach happens—but not one they bought the tools for. What went wrong? Unpacked threat lists amplify perceived risk by double-counting events and over-allocating probability to specific, vivid threats. The “whole” risk—breach this year—was stable; the unpacked story bent judgment toward what sounded concrete and urgent (availability bias piggybacks here).

Teams improve when they tie threats to a single annualized loss expectancy (ALE) for “breach of sensitive data,” then treat sub-threats as pathways, not independent events. The numbers stop ballooning; the budget goes where it actually reduces whole risk.
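
A minimal sketch of that structure, with hypothetical figures: one whole ALE, and threats as conditional shares of it that must sum to 100%:

```python
# One whole ALE, with threats as pathways inside it. All figures hypothetical.

p_breach_this_year = 0.12               # the "whole" risk, estimated first
loss_given_breach = 2_000_000           # single-loss expectancy, dollars

ale = p_breach_this_year * loss_given_breach  # annualized loss expectancy

pathway_shares = {  # P(pathway | breach): shares of the whole, sum to 1.0
    "phishing": 0.40,
    "credential stuffing": 0.25,
    "vulnerability exploitation": 0.20,
    "insider misuse": 0.10,
    "supply-chain compromise": 0.05,
}
assert abs(sum(pathway_shares.values()) - 1.0) < 1e-9

for threat, share in pathway_shares.items():
    print(f"{threat:28s} ALE share: ${ale * share:>10,.0f}")
print(f"{'total (capped by the whole)':28s} ALE:       ${ale:>10,.0f}")
```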

4) Medical decision-making: symptom grids

A patient asks, “What’s the chance this headache signals something serious?” Doctor says, “Low.” Later, a site lists “tumor,” “aneurysm,” “stroke warning,” “medication side effect,” “infection.” Each risk is “rare but possible.” The patient sums the fear. It feels higher than “low.”

In clinical reasoning, unpacking differentials can improve diagnosis but worsen patient probability judgment. A compassionate move is to anchor the whole probability first (“less than 1% based on your presentation and age”), then explain pathways inside that bounded whole. That fends off subadditivity-fueled anxiety without hiding possibilities.

5) Betting and forecasting: same game, more “ways to win”

Sports bettors sometimes overestimate a team’s chance to win after listing “paths to victory”: fast break dominance, hot three-point shooting, opponent foul trouble, bench energy. Each path seems plausible; stacked together, they make the team feel safer. But those paths overlap and aren’t independent. Similarly, political forecasters who unpack “paths to 270” can be misled unless they tie those paths back to correlated state probabilities. Good models enforce additivity; our brains don’t.

6) Legal strategy: motion stacking

A litigator plans to file four motions pre-trial. Each has a “reasonable chance.” The team feels protected. But the “whole” question—“Will the client avoid trial?”—didn’t change. Listing more motions can inflate confidence that “something will work,” and the same bias inflates the cost side. The team ends up over-filing, burning time and goodwill.

Lawyers who budget outcomes first—settle, dismiss, go to trial—set clearer expectations and allocate effort to the highest leverage path. Motions become means, not a self-inflating probability buffet.

7) Personal finance: income streams fantasy

The “multiple income streams” meme often triggers subadditivity. People list five side hustles and feel rich. Each has a “maybe” attached. Together, the “maybe” morphs into “likely” income. Reality disagrees. These streams share constraints: time, attention, market demand. The whole earning capacity is the ceiling; unpacked lines don’t multiply it.

Better approach: set a whole target first (one number for total monthly side income), then check whether the streams can realistically fit inside the hours and attention you actually have.

8) Incident postmortems: too many causes, too much certainty

After an outage, teams list causes: config drift, noisy canary, stale feature flag, failed rollback script. Each earns a percentage of blame. If you add these “shares,” you sometimes pass 100% because each cause feels “enough to explain the outage.” That’s subadditivity in disguise—root cause lists make single events look overdetermined.

A more disciplined story: define the minimal cut set—the smallest combination of factors sufficient for the outage—and estimate the probability of that set recurring. Don’t let the parts bid against each other with numbers that inflate the whole.
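
A toy version of that estimate, assuming the cut-set factors are independent; the factor names and probabilities are hypothetical:

```python
# Minimal cut set: ALL of these had to hold for the outage to happen (AND).
cut_set = {
    "config drift present": 0.30,
    "canary too noisy to alert": 0.20,
    "rollback script fails": 0.10,
}

p_recurrence = 1.0
for factor, p in cut_set.items():
    p_recurrence *= p

# The AND structure keeps the joint probability (0.006) far below any single
# factor -- the parts can't bid against each other past the whole.
print(f"P(this cut set recurs): {p_recurrence:.3f}")
```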

9) Hiring forecasts: pooled optimism

A hiring manager says, “We have three strong candidates; each has a 50% chance of accepting. So we’re at 150%.” Jokes aside, people behave as if that’s true. They stop sourcing. Dependencies—competing offers, compensation constraints, timing—correlate outcomes. The whole probability of filling the role this month isn’t the sum of the candidates’ acceptance probabilities; it’s constrained by the funnel and market.

A practical tactic: forecast the filled-by date first using historical acceptance rates and stage conversion data, then treat candidate-specific probabilities as moves within that bound.
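
The corrected arithmetic, as a sketch. Independence is assumed; correlated offers (a hot market, a shared comp ceiling) would push the real number lower:

```python
# Three candidates at 50% each is not "150%". Under independence, the chance
# that at least one accepts is 1 - (1 - p)^n.

p_accept = 0.50
n_candidates = 3

p_fill_naive = p_accept * n_candidates                    # 1.50 -- not a probability
p_fill_independent = 1 - (1 - p_accept) ** n_candidates   # 0.875

print(f"naive sum:         {p_fill_naive:.2f}")
print(f"independent bound: {p_fill_independent:.3f}")
```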

10) Creative projects: more beats, less chance

In storytelling and game design, teams list “hooks” and “beats” to ensure engagement. The more they list, the safer the story feels. But audience attention is finite. Subadditivity pushes creators to over-stuff, believing many plausible hooks make success more likely. Usually, they crowd each other. The whole effect—clarity, emotion—erodes.

Strong editors ask, “What’s the one beat we must land?” They guard the whole against the lure of parts.

How to recognize/avoid it

Subadditivity thrives when we:

  • Feel safer with detail than with trade-offs.
  • Treat “paths” as independent when they share bottlenecks.
  • Use “maybe” numbers without forcing them into a single, additive frame.

You don’t need fancy math to disarm it. You need a few simple habits.

Habit 1: Size the whole before splitting the pie

Start with the broad event and commit to a number or range: “Chance we launch by June 30: 35–50%.” Only then break out components. Every sub-estimate must reconcile with the whole. If the sum of parts makes the whole look inconsistent, adjust the parts, not the anchor.

This “whole-first” anchor keeps the unpacked story inside reality.

Habit 2: Force additivity checks

After you estimate components, run a quick additivity pass:

  • Are we double-counting overlapping risks?
  • If we combined all “failure modes,” do we exceed 100% chance of failure?
  • If we list multiple “ways to win,” are they actually disjoint? If not, stop summing.

Write the whole and parts side by side. Call out conflicts explicitly.
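
If you want the check automated, here's a minimal sketch; the function name, tolerance, and numbers are our own invention:

```python
def additivity_check(whole: float, parts: dict[str, float], tol: float = 0.05) -> None:
    """Compare a whole-first estimate against the sum of its MECE parts."""
    total = sum(parts.values())
    if total > 1.0:
        print(f"FAIL: parts sum to {total:.2f} -- over the 100% pie")
    elif abs(total - whole) > tol:
        print(f"WARN: parts sum to {total:.2f} but the whole is {whole:.2f} -- reconcile")
    else:
        print(f"OK: parts ({total:.2f}) reconcile with the whole ({whole:.2f})")

# The slip risks from the roadmap story above: the parts say 0.75,
# the room's whole-question answer said ~0.50 -- a conflict worth surfacing.
additivity_check(
    whole=0.50,
    parts={"vendor approval": 0.30, "test flakiness": 0.25, "legal review": 0.20},
)
```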

Habit 3: Use structured partitions, not ad hoc lists

Subadditivity loves sloppy partitions. Use mutually exclusive and collectively exhaustive (MECE) buckets when you can. For example, categorize delays as “internal,” “external,” and “unknown,” where items only fit in one. MECE blocks overlap-driven inflation.

If you can’t be strictly MECE, at least mark overlaps clearly (“shared with external review”).

Habit 4: Normalize to a 100% pie

When estimating multiple outcomes of the same event (e.g., project finishes early/on time/late), require the probabilities to sum to 100%. Use a grid. It feels painfully simple because it works. The pie constrains the parts.
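
A tiny sketch of that normalization, with illustrative gut numbers that over-fill the pie:

```python
# Force outcome sets onto a 100% pie: normalize gut numbers so they sum to 1.

raw = {"early": 0.15, "on time": 0.55, "late": 0.50}  # sums to 1.20 -- too big

total = sum(raw.values())
normalized = {outcome: p / total for outcome, p in raw.items()}

for outcome, p in normalized.items():
    print(f"{outcome:8s} {p:.2f}")  # early 0.12, on time 0.46, late 0.42
```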

Habit 5: Separate likelihood from impact

Subadditivity worsens when you blend “how likely” with “how scary.” Keep a two-column view: probability and impact. A string of scary but unlikely events can look like the end of the world. Make the low likelihood explicit; weight your decisions by expected value.

Habit 6: Keep correlation in view

Ask, “If this part happens, does it make other parts more or less likely?” Correlation shrinks the useful independence of your list. When parts move together, the combined probability grows slower than the sum. Note dependency arrows; it will cool your numbers.
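
A numeric sketch of why correlation cools the sum; the correlated joint probability is an assumed value for illustration:

```python
# P(A or B) = P(A) + P(B) - P(A and B). Positively correlated parts overlap
# more than independent ones, so their union shrinks further below the sum.

p_a, p_b = 0.30, 0.40

p_joint_independent = p_a * p_b   # 0.12
p_joint_correlated = 0.25         # assumed: A and B tend to co-occur

union_independent = p_a + p_b - p_joint_independent  # 0.58
union_correlated = p_a + p_b - p_joint_correlated    # 0.45
naive_sum = p_a + p_b                                # 0.70

print(f"naive sum: {naive_sum:.2f}, independent union: {union_independent:.2f}, "
      f"correlated union: {union_correlated:.2f}")
```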

Habit 7: Calibrate with base rates

Before assigning numbers to detailed stories, pull base rates. “What percent of similar projects shipped in 90 days?” “What’s the annual breach probability in our industry?” Base rates anchor the whole; details then tweak up or down. Without base rates, details run the show.

Habit 8: Practice probability elicitation

Turn vibes into numbers, then grade yourself. Use Brier scores to measure forecast accuracy. Keep a private log. Over a few months, you’ll see where you inflate the parts. Calibration training reduces subadditivity by building respect for the 0–1 line.
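
A minimal Brier-score log in Python; the forecast entries are illustrative:

```python
# Brier score: mean squared error between your stated probability and what
# actually happened (1 = it happened, 0 = it didn't). Lower is better.

forecasts = [  # (stated probability, outcome)
    (0.70, 1),
    (0.60, 0),
    (0.65, 0),
    (0.75, 1),
]

brier = sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)
print(f"Brier score: {brier:.3f}")  # 0 is perfect; 0.25 is coin-flip territory
```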

Habit 9: Limit granularity in early stages

Early on, coarse categories beat fine detail. Too much unpacking creates pseudo-precision and raises subadditivity risk. Start broad, refine only when the decision demands it.

Habit 10: Ask the counter-question

When someone lists five ways we might fail, ask, “What’s the single number for ‘we miss the deadline’?” When someone lists four channels to hit target, ask, “What’s our total probability of hitting target?” Drag the brain back to the whole.

A quick checklist (stick this on your monitor)

  • Start with the whole probability; write it down.
  • Partition events MECE when possible.
  • Force parts to live inside the 100% pie.
  • Mark dependencies and overlaps explicitly.
  • Use base rates to anchor; adjust with details.
  • Don’t sum non-disjoint “ways to win/fail.”
  • Calibrate forecasts; review Brier scores monthly.
  • Prefer coarse estimates early; add detail only as needed.
  • Run an additivity pass before committing.
  • If it sounds more probable because it’s more detailed, pause.

Related or confusable ideas

Subadditivity sits in a neighborhood of biases and effects. It’s easy to mix them up.

  • Conjunction fallacy: People judge A and B together as more probable than A alone (classic “Linda is a bank teller and active in the feminist movement”). Subadditivity is different: it’s about over-assigning probability to unpacked descriptions of the same event. Conjunction is about wrong ordering; subadditivity is about wrong adding.
  • Partition dependence: Probabilities shift depending on how you divide the outcome space. Finer partitions grab more attention, raising their assigned probabilities (Fox & Rottenstreich, 2003). Partition dependence is a key mechanism behind subadditivity.
  • Support theory: The formal account that unpacked descriptions receive more “support,” inflating perceived probability relative to packed descriptions (Tversky & Koehler, 1994). If you want the theory frame, this is the backbone.
  • Availability bias: Vivid, recent, or emotionally charged details feel more likely. Availability piggybacks on subadditivity, making the unpacked story not just more numerous but more sticky (Tversky & Kahneman, 1973).
  • Planning fallacy: We underestimate completion times, even when we know better (Kahneman & Tversky, 1979). Subadditivity can feed it: teams unpack the “tasks” and feel each is manageable, overweighting the parts.
  • Disjunction effect: People defer choices until uncertainty resolves, even when they would pick the same option under every outcome. Different from subadditivity, but both exploit how we handle uncertainty in fragments.
  • Denominator neglect: Focusing on numerators over denominators, like seeing “10 winning tickets” and ignoring the total pool. It amplifies subadditivity when we count ways-to-win but forget the size of the whole event space.
  • Overconfidence: Narrow intervals and high subjective certainty. Subadditivity can masquerade as confidence by turning detail into “evidence.” The fix is similar: calibration and additivity checks.

If you’re sorting vocabulary for your team, here’s the quick mapping: subadditivity is about inflated totals when unpacked; conjunction fallacy is about wrong comparisons; partition dependence is the mechanism; support theory explains why unpacking adds “weight.”

Why the brain does this (short, practical answer)

We’re storytellers with calculators bolted on. Details sell a story. When you unpack an event, you create multiple hooks for your attention. Each hook convinces you there are “more ways” for the event to happen. The mind assigns a little probability to each hook and then forgets to take away that same amount from the “none of the above” bucket. We inflate the parts and starve the whole.

Research backs this. Support theory shows that descriptions with more detail or specificity receive higher judged probabilities because they feel more “supported” by evidence, even when the evidence is just elaboration (Tversky & Koehler, 1994). People also anchor on the unpacked items and fail to adjust down for overlap and correlation (Fox & Tversky, 1998). The fix isn’t to ban detail; it’s to fence it with structure.

A field guide: catching subadditivity in your daily work

In meetings

When someone says, “There are three ways this can succeed,” ask, “Are those disjoint? If not, what’s the total chance of success?” Then pause. Don’t rush to fill the silence. The brain needs a second to swap story mode for probability mode.

When a risk register grows by five items and the team gets nervous, ask, “What’s the overall risk level before and after this list? Did it actually change?” If the answer is “it feels scarier,” you’ve seen subadditivity inflate fear.

In docs and dashboards

On roadmaps, require a “whole probability” column at the epic level. If you list specific risks or dependencies, include an “overlap with other risks” note. Force scoping documents to end with a single “On-time delivery: X–Y%” estimate and a bold line: “Parts must sum under the whole.”

On risk dashboards, visualize a 100% bar and fill slices for each pathway. This physical constraint will trigger corrections when people try to add too much.

In forecasts and experiments

When running experiments, unify by decision: “What is the probability this experiment changes the decision?” Not five separate “chance of a lift on metric X” guesses summed together. Multiple metrics create a subadditivity trap; pick the threshold that matters and judge against that.

For cost estimates, consider P50/P80 budgets for the whole program before itemizing. Then, as you unpack line items, keep the fence: the sum of line-item P50s rarely equals the program P50, because medians aren’t additive once costs are skewed or correlated. Note the difference explicitly.
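
A Monte Carlo sketch of that gap, with hypothetical lognormal line items; any right-skewed cost distribution shows the same effect:

```python
# Why the sum of line-item P50s isn't the program P50: medians aren't
# additive for skewed costs, even with fully independent items.
import numpy as np

rng = np.random.default_rng(42)
n_sims, n_items = 100_000, 8

# Each line item: right-skewed cost with median ~10 (arbitrary units).
items = rng.lognormal(mean=np.log(10.0), sigma=0.6, size=(n_sims, n_items))
program_cost = items.sum(axis=1)

sum_of_p50s = np.median(items, axis=0).sum()
program_p50 = np.median(program_cost)
program_p80 = np.percentile(program_cost, 80)

print(f"sum of item P50s: {sum_of_p50s:8.1f}")  # ~80
print(f"program P50:      {program_p50:8.1f}")  # higher: the skew doesn't cancel
print(f"program P80:      {program_p80:8.1f}")
```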

In personal life

Planning a trip? Instead of listing ten “things that could go wrong” and spiraling, set a whole probability of “trip majorly disrupted.” Treat the list as routes into that bucket, not as separate cumulative threats. You will feel calmer and pack better.

Buying insurance? Don’t let the brochure’s list of covered events inflate your sense of risk. Ask, “What is the chance I’ll file a claim of any kind this year?” Then judge whether the premium is worth it given that whole.

A short origin story: the research, without the dust

The modern account of subadditivity comes from support theory (Tversky & Koehler, 1994). In experiments, people judged probabilities of “packed” events (like “death from natural causes”) lower than the sum of “unpacked” sub-events (“heart disease,” “cancer,” “other”). The mere act of detailing increased judged probability. Fox and Tversky (1998) further explored unpacking and its effect on judged likelihoods. Rottenstreich and Tversky (1997) extended support theory, showing how unpacking and repacking the same event’s description shifts its judged probability. The common thread: our judgments track the description, not the math.

You don’t need to memorize the papers. Just remember: details feel like evidence, and our brains pay them interest.

FAQ

Q1: Is subadditivity always bad? Not always. Unpacking can surface real risks or opportunities you would have missed. It becomes bad when you let detail change the total probability without new information. Use unpacking to find paths. Use additivity to keep the total honest.

Q2: How is this different from the conjunction fallacy? Conjunction is judging “A and B” as more likely than “A.” That’s a logical error in ordering. Subadditivity is inflating the total when you split an event into parts. You can be guilty of both in the same meeting, but they’re distinct mistakes.

Q3: What’s one quick fix I can use today? Write the whole probability first. Literally fill in: “Chance of X: __%.” Then unpack. If your parts imply a higher total, trim them or re-label overlaps. It’s boring and it works.

Q4: Does this matter if we don’t use numbers and only talk in words? Yes. Your brain still “adds.” Words like “likely,” “possible,” and “rare” accumulate in your sense-making. If you won’t put numbers on it, at least limit the number of “likely” labels and state a single “overall” likelihood at the end.

Q5: How do I handle correlated risks? Draw a quick dependency map. If A makes B more likely, don’t sum A and B. Either model a combined scenario or estimate the conditional probability of B given A. If you don’t want math, label with text: “B is mostly a consequence of A.”

Q6: We have a risk register with 40 items. What now? Compress into MECE themes and assign an overall probability per theme. Then decide what you’ll do about the top two themes. Long lists create the illusion of overwhelming danger. Short lists with explicit whole probabilities create movement.

Q7: Can tools help? Yes. Use spreadsheets that enforce a 100% total for outcome sets. Track forecasts with Brier scores and keep a scoreboard. The friction of structure counters the inflation of details.

Q8: What if stakeholders demand detail? Give it to them—but fence it. Start with the whole number, then show the unpacked breakdown as routes into that whole. Put a line that says, “Sub-events are pathways; totals cannot exceed 100%.” It earns trust without feeding bias.

Q9: How do I teach this to my team without sounding pedantic? Tell a story from your own work where the sum of parts fooled you. Then introduce a tiny rule: “Whole first, parts second.” Run one additivity check in the next planning meeting. Let the results speak.

Q10: Are there cases where unpacking is essential? Absolutely—safety engineering, aviation, medicine. Unpacking reveals single points of failure. Even there, experts constrain the whole (fault trees, minimal cut sets). Emulate that discipline: detail with structure.

Checklist: subadditivity sanity pass

  • Write the whole probability before listing parts.
  • Partition MECE when possible; label overlaps when not.
  • Keep a 100% pie for outcome sets; don’t exceed it.
  • Don’t sum non-disjoint paths; check dependencies.
  • Anchor with base rates; adjust with specifics.
  • Separate probability from impact in two columns.
  • Calibrate: log forecasts and review monthly.
  • Limit granularity early; add detail only when it changes the decision.
  • Do an additivity pass before sign-off.
  • If details made it feel more likely, ask what truly changed.

Wrap-up

Subadditivity is sneaky because it piggybacks on something good: curiosity. We unpack because we care. We want to see how things could happen, where they might snag, how we might steer around the boulder. But detail isn’t a promise. It’s just light. Without structure, it blinds.

Guard your decisions with one simple move: size the whole before you slice the pie. Then add detail without letting it swell the total. You’ll forecast better, plan saner quarters, and sleep a little easier.

We’re building a Cognitive Biases app at MetalHatsCats to make this kind of thinking natural: quick prompts that ask “whole first?”, nudges to normalize to 100%, and short drills that sharpen calibration. If you want your team to hear the math inside the story, we’d love to have you try it when it’s ready.

Details matter. Just not more than the whole.
