Assumed Similarity Bias

Why we believe other people are more like us than they truly are

By the MetalHatsCats Team

You tell a teammate, “Everyone hates long kickoff meetings,” and you watch them wince. Not because they disagree, but because you just assumed they felt about meetings the way you do. The next week, you push the team to cut the agenda in half. Post-launch, it turns out the junior folks were lost because the collective context got axed. The team drops a sprint to fix needless confusion. You didn’t have bad intentions. You simply pressed your preferences onto other minds and called it consensus.

Assumed Similarity Bias is our habit of believing others are more like us—more aligned with our ideas, preferences, and feelings—than they truly are. It feels warm and cooperative. It often backfires.

We’re the MetalHatsCats Team. We’re currently sketching and shipping a Cognitive Biases app because we want fewer avoidable mistakes and more “ohhh” moments in our days. This essay is our field guide to seeing and steering around Assumed Similarity Bias: what it is, how it sneaks in, real-world stories, and the practical moves that keep it from eating your project, your product, or your relationships.

What is Assumed Similarity Bias and why it matters

At its core, Assumed Similarity Bias is projection. You use your own beliefs, preferences, or behavior as the baseline, then mirror it onto other people—coworkers, users, voters, your partner, your kid. You aren’t trying to erase differences; your brain just saves effort by substituting “me” for “them.”

Psychologists have charted flavors of this for decades. The false consensus effect shows we overestimate how common our own views are (Ross, Greene, & House, 1977). Projection bias, a cousin, makes us expect others’ future states to look like our current state (Loewenstein, O’Donoghue, & Rabin, 2003). Assumed similarity wraps those moves together and applies them across contexts: inside teams, in design choices, in negotiation, in hiring, in friendships.

Why it matters:

  • It creates invisible fractures. A product team of night owls assumes “no one wants morning notifications,” and they ship a default that irritates half their users.
  • It dulls research. If you “already know” what customers need—because you would need that—you stop seeing what’s actually happening.
  • It breeds polite silence. People who are unlike you withdraw when they sense their differences are unwelcome or invisible.
  • It wastes time. Misreads lead to rework, re-education, or repair.
  • It shrinks imagination. If the only mind you consult is yours, you build narrow things for narrow worlds.

Assumed Similarity Bias doesn’t mean we’re selfish. It means the fastest story our mind can write is “probably like me.” In fast-moving work, fast stories dominate. Our job is to slow them just enough to make room for truth.

Examples

Stories make biases visible. It’s easier to catch a vibe than a definition. Here are cases we’ve met in teams and products, the places where “they’re like me” quietly steers the ship.

The product that baffled its best users

A small analytics startup builds a dashboard for power users. The founders live in SQL. They assume their early adopters want raw tables with minimal guardrails because that’s how they like to work: fast, flexible, no training wheels. They launch a dazzling, dense dashboard.

Within a month, support tickets spike. Their largest customers are domain experts, not analysts. They want guardrails and friendly defaults. They don’t want to feel stupid for clicking the wrong tiny icon. Demos stall. The founders keep saying, “But this is how we do it,” as if that’s the baseline for all minds. They patch tooltips. Churn rises anyway.

Reality: the “power” the team craved was the anxiety their users avoided. The assumed similarity was not a small error; it was the product’s spine.

The hiring panel that hired itself

A hiring panel of three senior engineers screens candidates for a platform role. They agree that “owning the code” and “guarding the architecture” define excellence. In interviews, they glow when candidates talk about refactoring or standards, and they feel meh when candidates emphasize cross-functional empathy or onboarding.

Six months later, the team is heavy on brash code guardians and light on glue people. On-calls are fine; team glue is gone. Demos feel tense. New teammates bounce because nobody teaches.

The panel didn’t try to exclude. They assumed the ideal candidate looks like them on their best day. They replanted themselves and called it a team.

The “obvious” UX that alienated older adults

A fintech app replaces step-by-step forms with a single elegant scroller. It looks calm and modern. The team is all under 35 and can parse visual hierarchy by reflex. Post-launch, older users report “I can’t tell what’s tappable.” NPS drops among the 55+ segment. The design review notes repeat “it’s obvious” because it was obvious—to them.

Obviousness is a mirage built from your past interfaces. When teams assume their pattern library lives in everyone’s head, they ship frustration with a soft glow.

The meeting “efficiency” that erased context

A manager trims weekly standups from 30 minutes to 10. She hates long meetings and believes speed equals respect. Two quiet folks stop raising risks because they need a little time to find words. One mis-scope snowballs into two lost sprints.

Short meetings weren’t the villain. Assumed similarity was. Efficiency on one axis (time) murdered efficiency on others (clarity, alignment, safety).

The negotiation that turned into a stalemate

A founder assumes the investor across the table values control the way she does. She believes board seats will be the sticking point. She digs in there and gives ground on liquidation prefs.

Turns out the investor cares most about downside protection and sees the board seat as symbolic. The founder defended the wrong hill because she mapped her motives onto the other side. Deal breakers were swapped in her head.

The “universal” schedule that burned a teammate

A global team sets core hours “for collaboration.” The lead prefers early mornings, so the core block lands at 7–11 a.m. Pacific. The Berlin teammate becomes a ghost at dinner for months. Morale drops. He doesn’t complain because he doesn’t want to be “difficult.”

When the lead finally asks directly, she hears the real pattern: the teammate does his best work after 9 p.m. and needs quiet mornings for family. No one intended harm. The default was a mirror.

The classroom that misread silence

A teacher asks, “Does this make sense?” Several students nod. Others look down. She assumes nods mean understanding and moves on. Later, a pile of wrong answers. The nods meant “I want to look like I get it,” not “I get it.” The teacher projected her own habit—speaking up when lost—onto students with a different survival strategy.

Assumed similarity doesn’t just steer products and teams. It explains dinner arguments, text misfires, and group chats that slowly die. The common thread: when we mistake our inner weather for the whole forecast, we predict badly.

How to recognize and avoid it

Assumed Similarity Bias thrives in speed, sameness, and silence. You won’t “solve” it with a definition. You’ll beat it with small frictions that surface difference early.

Here are field-tested moves. Use them like seatbelts.

Run a quick “Other-Mind Check” before decisions

Right before deciding, ask: “If someone with a different background or constraints were in this moment, what would they want, fear, or choose?” Don’t guess forever. Write two lines. The act of naming a non-you perspective changes what you notice.

Add a constraint: name a concrete person. “What would Sana—who uses Android on a spotty connection—see here?” The more specific, the better.

Separate “preference” from “requirement”

We often package our wants as needs. “We need a dark theme” might be “I prefer a dark theme at night.” If you name it as a preference, you create room for others’ requirements. In design reviews, tag comments as P (preference) or R (requirement). Watch how the conversation softens and clarifies.

Eject “obvious” and “everyone” from your vocabulary

Strike phrases like “everyone knows,” “obviously,” “no one wants,” “users hate,” and “we all agree.” Replace with “I believe,” “our last test showed,” “in segment X,” or “three of five users.” Language shapes your map. Choose map words, not mirror words.

Slow down with a micro-interview

Before shipping a change that rests on “people like X,” grab two people unlike you—different role, seniority, device, culture. Ask them to think aloud as they use the thing or hear the pitch. Ten minutes per person. Record or take notes. You’ll find at least one assumption to revise 80% of the time.

Scaffold disagreement and ask for weird

Invite divergence directly. Instead of “Any feedback?” ask, “Tell me one thing that could fail for someone unlike me.” Or try, “What’s one assumption I’m making because of my background?” You won’t get magic every time, but you’ll model that difference is welcome.

Use base rates, not your gut, for population guesses

If you’re making a claim about “most,” grab a base rate. Market share, device usage, accessibility stats. Your own usage is not a sample. You’re a sample of one with weird edges. Base rates save you from over-fitting your life (Kahneman & Tversky, 1973).
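To make the habit concrete, here is a minimal sketch in TypeScript. The segment names and shares are hypothetical placeholders, not real market data; the point is the ritual of checking a number before the word “most” survives a decision.

```typescript
// Hypothetical base rates pulled from analytics or market reports.
// These numbers are placeholders, not real data.
const segmentShare: Record<string, number> = {
  "dark-theme-users": 0.34,
  "android-users": 0.71,
};

// "Most" should mean a measured majority, not "me and people like me".
function supportsMostClaim(segment: string, threshold = 0.5): boolean {
  const share = segmentShare[segment];
  if (share === undefined) {
    throw new Error(`No base rate for "${segment}": measure before claiming "most".`);
  }
  return share >= threshold;
}

console.log(supportsMostClaim("dark-theme-users")); // false: a preference, not a majority
```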

Write decision memos that name the target mind

When you log a decision, include: “Primary user stories,” “Non-goals,” and “Known non-us cases.” Force the team to say who this is for and who it’s not for. If your memo says “for everyone,” rewrite it. Real users are specific people; “everyone” is a stall.

Design defaults that forgive, not assume

Good defaults admit difference. If you don’t know a user’s preference, pick a forgiving default and make change cheap. For example, choose a notification default that alerts gently and make quieting or amplifying trivial. The move is not “choose perfectly.” It’s “reduce regret for being wrong.”
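In code, a forgiving default can be a few lines. This is a sketch, not a real API: the names (NotificationPrefs, resolvePrefs) and the values are ours, for illustration.

```typescript
// A forgiving notification default: gentle when we don't know the user,
// and trivially cheap to change in either direction.
type NotificationPrefs = {
  channel: "banner" | "sound" | "silent";
  quietHours: { start: number; end: number }; // 24h clock
};

// The default admits we might be wrong: no sound, respect typical sleep hours.
const FORGIVING_DEFAULT: NotificationPrefs = {
  channel: "banner",
  quietHours: { start: 22, end: 8 },
};

// One shallow merge makes overriding any single field cheap.
function resolvePrefs(userOverrides: Partial<NotificationPrefs>): NotificationPrefs {
  return { ...FORGIVING_DEFAULT, ...userOverrides };
}

const nightOwl = resolvePrefs({ quietHours: { start: 3, end: 11 } });
```

The design choice is the last function: when changing a setting costs one call, being wrong about the default costs almost nothing.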

Create disagreement rituals

A single teammate waving their hand won’t beat the comfort of similarity. Ritual helps. Try “two up, one down” at the end of a review: two things you like, one thing you’d change for someone unlike you. Or a “premortem” where each member writes how the idea fails badly for a group they don’t belong to. Read out loud. Then decide.

Ask spectrum questions, not yes/no

In surveys, replace “Do you prefer X?” with “On a scale from ‘strongly prefer A’ to ‘strongly prefer B,’ where are you?” Or “In which situations would this be helpful versus confusing?” Spectrum answers reveal edges. Yes/no compresses difference into noise.
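Here is the compression in miniature, as a TypeScript sketch with invented responses. The yes/no collapse reads as consensus; the raw scale shows a camp at each pole.

```typescript
// Responses on a 1–7 scale, "strongly prefer A" (1) to "strongly prefer B" (7).
// The sample is invented for illustration.
const responses = [1, 2, 2, 4, 4, 5, 6, 7, 7, 7];

// Collapsing to yes/no: "70% prefer B" sounds like agreement.
const shareB = responses.filter((r) => r >= 4).length / responses.length; // 0.7

// The histogram keeps the edges the boolean erased: three respondents sit
// hard on the A side, and two of the "B" votes sit right at the midpoint.
const histogram = responses.reduce<Record<number, number>>((h, r) => {
  h[r] = (h[r] ?? 0) + 1;
  return h;
}, {});
console.log(shareB, histogram);
```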

Treat silence as data, not agreement

If a room is quiet, don’t assume consent. Ask by name. Offer async routes. Accept anonymous input for sensitive topics. People differ in power, safety, and style. Your comfort isn’t a thermometer for the room.

When in doubt, run a tiny experiment

Before committing, A/B test, pilot in one region, or trial with a small cohort. If you’re confident in a similarity-based bet, this won’t hurt you. If you’re wrong, you’ll learn cheaply. Run time-boxed experiments and publish the result, especially when a decision rests on “people like me.”
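A tiny experiment can also be tiny in code. Below is a minimal sketch in TypeScript: deterministic bucketing puts roughly 20% of users in the variant until a hard end date. The hash, the share, and the date are illustrative choices, not a prescription.

```typescript
// Map a user ID to a stable number in [0, 1) so the same user
// always lands in the same bucket. A simple 32-bit string hash.
function hashToUnit(userId: string): number {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h / 0xffffffff;
}

const PILOT_END = new Date("2025-07-01"); // the time-box: decide by this date
const PILOT_SHARE = 0.2; // ~20% of users see the new default

function inPilot(userId: string, now = new Date()): boolean {
  if (now >= PILOT_END) return false; // pilot over: fall back, publish results
  return hashToUnit(userId) < PILOT_SHARE;
}

// Same user always lands in the same bucket, so the comparison stays clean.
const variant = inPilot("user-42") ? "new-default" : "current-default";
```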

Checklist: Spotting and disarming Assumed Similarity Bias

  • Before deciding, write one sentence from a non-you perspective.
  • Tag your comments as preference (P) or requirement (R).
  • Ban “obvious,” “everyone,” and “no one” from decision talk.
  • Get two users or teammates unlike you to try the thing for 10 minutes.
  • Pull one base rate before claiming “most” or “usually.”
  • Name who it’s for and not for in the decision memo.
  • Choose forgiving defaults and make changes easy.
  • End reviews with “two up, one down” for a non-you group.
  • Ask spectrum questions. Avoid yes/no on preferences.
  • Treat silence as missing data, not agreement.
  • Pilot risky assumptions with a small, time-boxed test.

These moves won’t slow you down as much as rework will. They sharpen reality where your brain tries to round off corners.

Related or confusable ideas

Biases often overlap. Here’s how Assumed Similarity Bias bumps into neighbors and how to keep the lines clean.

  • False consensus effect: Overestimating how many people share your beliefs or behaviors. It’s a classic lab finding and everyday hazard (Ross, Greene, & House, 1977). Assumed similarity includes this but also covers subtle projections about preferences and feelings inside teams.
  • Projection bias: Expecting others (or your future self) to share your current state—hungry, stressed, motivated (Loewenstein, O’Donoghue, & Rabin, 2003). It’s more about states over time; assumed similarity is broader across people right now.
  • Typical mind fallacy: The informal name for assuming “my mental processes are the default.” It’s internet-famous, not a formal theory, but it nails the vibe: “Because this is intuitive to me, it’s intuitive, period.”
  • Naïve realism: The belief that we see the world as it is, and those who disagree are biased, uninformed, or irrational (Ross & Ward, 1996). Assumed similarity often rides on naïve realism—if my take is reality, then others must share it.
  • Outgroup homogeneity effect: Seeing members of other groups as more similar to one another than they really are (Quattrone & Jones, 1980). Teams run a version of it inward, too: we treat our own group as the norm and erase our internal differences because sameness feels safe.
  • Empathy gap: Misjudging other people’s feelings or our own in different emotional states. If you’re calm, you think the anxious person should “just think it through.” That’s assumed similarity on an emotional channel (Loewenstein, 1996).
  • Availability heuristic: Mistaking what’s vivid for what’s common. Your own experiences are the most available, so they feel representative. Assumed similarity plugs into availability and magnifies it.
  • Dunning–Kruger effect: Overestimating your knowledge when you lack skill. Not the same thing, but when you assume your way is universal and also overrate your understanding, the combo is chaos.
  • Pluralistic ignorance: Everyone privately disagrees but publicly goes along because they think they’re the odd one out (Prentice & Miller, 1993). Assumed similarity feeds the illusion that “they all agree,” which keeps the silence going.

The practical difference: assumed similarity is the mental shortcut; these related ideas are the flavors and consequences. If you learn one habit—asking “who isn’t like me?”—you cut into many of them at once.

Wrap-up

Assumed Similarity Bias is charming. It feels like warmth and common ground: “We’re on the same page.” Sometimes you are. Often you’re not. And the cost of pretending you are is paid later—in rework, in quiet resentment, in churn, in “we should have known.”

The fix isn’t cynicism. It’s curiosity sharpened into routine. Speak for yourself and ask for the rest. Run small tests. Name who this is for. Listen to the person who isn’t you. That’s not a lofty virtue. It’s a practical lever.

We’re building a Cognitive Biases app because we screw this up too—at the whiteboard, in DMs, with family plans on Sunday mornings. The app will nudge us with short exercises, tiny checklists, and “catch-it-in-the-act” prompts. Not to make us perfect. To make us a bit more awake right when the brain wants to coast.

If you remember one line, let it be this: you are a fascinating dataset of one. Build and decide like you know that.

FAQ

Q: Is assuming similarity always bad? A: No. Quick alignment often comes from shared context. If you and your pair programmer have worked together for two years, assuming some similar habits reduces friction. Problems start when you leave your bubble—new teammates, new users, new markets—and keep using the same shortcut without checking.

Q: How do I challenge assumptions without derailing momentum? A: Time-box it. Say, “Let’s spend eight minutes naming one way this fails for someone unlike us.” Then either run a tiny test or document the risk and move on. Rhythm beats righteousness. A small ritual done every time keeps speed and sanity.

Q: What if leaders shut down differences with “We all know…”? A: Ask for permission to pilot. “Could we run a one-week test with the alternate default for 20% of users?” Leaders often resist abstract disagreement but accept concrete experiments. If that fails, gather two data points from real users and bring them to the next review. Evidence changes the tone.

Q: How can I spot this bias in myself, fast? A: Watch for sweeping words in your mouth: “obvious,” “everyone,” “no one,” “always,” “never.” When they show up, pause and rewrite. Also track moments when you feel surprised or annoyed by other people’s preferences—that’s often your mirror cracking.

Q: What’s a good first step if our team has been building for “people like us”? A: Start with a user council made of five people unlike the team on at least three axes (age, experience, device, region). Meet them monthly for 45 minutes. Watch them use your stuff. Pay them. The council will recalibrate your “obvious” meter within two sessions.

Q: How do we bake this into hiring? A: Add a structured interview block that evaluates “how the candidate designs for non-self users.” Present a brief with a user unlike the candidate. Score their questions, not their pitch. Do they seek base rates, ask spectrum questions, propose forgiving defaults? You’re hiring a mind, not a mirror.

Q: What if my culture values harmony and people won’t disagree openly? A: Offer async, anonymous inputs. Use polls with spectrum options. Invite people to write “premortems” privately. Then summarize patterns without naming names. You can respect harmony and still surface difference.

Q: How do I keep my personal relationships safe from this bias? A: Switch from “Do you want X?” to “What would good look like for you?” Then reflect back what you heard. When in doubt, ask for concrete examples: “Show me a Saturday that feels ideal to you.” Most blowups are mismatched maps, not malice.

Q: Can data replace asking people? A: Data is a flashlight, not a face. It shows what happened, not why. Pair behavioral data with qualitative touches—two calls, three notes, one video. The combo catches differences your dashboards flatten.

Q: How often should we revisit our assumptions? A: Tie it to change. New market, new persona, new channel, new teammate, new manager—run the checklist. Otherwise, review quarterly. Assumptions decay like bread. Freshen them.

Checklist

  • Write one “other-mind” sentence before each major decision.
  • Mark comments as P (preference) or R (requirement).
  • Replace “everyone/obvious/no one” with evidence or segment language.
  • Get two unlike-you users or teammates to try the thing for ten minutes.
  • Pull one base rate before saying “most people.”
  • Name who it’s for and not for in your decision memo.
  • Pick forgiving defaults; make changes easy.
  • Close reviews with “two up, one down” for a non-you group.
  • Ask spectrum questions instead of yes/no.
  • Treat silence as missing data; offer async or anonymous routes.
  • Pilot risky assumptions with a small, time-boxed test.

From all of us at MetalHatsCats: the world is richer than your reflection. That’s the best news for building anything worth keeping.
