[[TITLE]]
[[SUBTITLE]]
You’re holding a tape measure that only shows inches, and someone asks you to size up the ocean. You stretch the tape, squint at the numbers, and declare: 18 inches. Anyone watching would laugh, but we make that move every day. We drag a human ruler across everything—the minds of animals, the rhythms of forests, the behavior of markets, the future of AI, even the possibility of life beyond Earth—and we decide what matters in units that fit us.
Anthropocentric thinking is judging everything by human standards—our senses, our values, our timelines, our comfort.
At MetalHatsCats, we’re builders first. We’re crafting an app called Cognitive Biases to help people spot the invisible rules in their thinking. This piece is about one of those rules. Anthropocentric thinking is powerful, convenient, and often wrong in costly ways. Let’s walk through it and learn how to design, decide, and imagine without that human-sized tape measure strangling our view.
What Is Anthropocentric Thinking—and Why It Matters
Anthropocentric thinking sets “human” as the default frame of reference. It says: what’s good for humans is good, what looks like us is meaningful, what works for us is normal, what doesn’t is irrelevant or broken. It doesn’t always show up as arrogance. Often it’s just the silent assumption that the human way of thinking, sensing, and valuing is the baseline for reality.
Why it matters:
- It shrinks the map of what exists. We miss forms of intelligence, communication, or value that do not mirror our own.
- It loads our decisions with hidden costs. We design products, policies, and ecosystems that are brittle because they only “fit” humans in the near term.
- It blinds science and technology. We skew experiments toward what we can easily measure, and we train models on what we can easily label.
- It narrows our moral circle. We treat non-human life as scenery or resource rather than co-participants in shared systems.
Anthropocentrism is tempting because it’s easy. Human brains evolved for fast, useful guesses, not cosmic humility. But we can still build habits that stretch the tape measure.
Stories and Snapshots: Where the Human Ruler Shows Up
This isn’t a conceptual lecture. It’s a field guide. Here are places we see anthropocentric thinking tripping people—and how the story changes once you swap rulers.
1) Animal Minds: If It’s Not Like Us, It’s “Not Smart”
For a long time, scientists tested animal intelligence with human-flavored puzzles: mirrors, pointing, word labels. If a crow doesn’t recognize itself in a mirror, the story went, it lacks self-awareness. Then corvid researchers designed crow-friendly tests—like future planning tasks with caching—and found complex memory, deception, and tool use (Emery & Clayton, 2004). Octopuses solved puzzles and used coconut shells as mobile homes. Different body, different habitat, different intelligence. The mirror wasn’t the measure; it was the mistake.
We believed “intelligence = language-like behavior.” Turns out “intelligence = fit-to-context problem solving.” You only see it when you stop asking animals to pass human exams.
2) Ecosystem “Services”: Nature as Employee of the Month
We often value nature in terms of what it can do for us: pollination, carbon capture, flood control. This ecosystem services framing moved environmental debates forward by making value legible in dollars (Costanza et al., 1997). But it also risks flattening non-human value into a spreadsheet shaped like our needs. A wetland isn’t valuable only because it helps a nearby city; it also sustains a world bigger than us, including life-ways we don’t yet understand. When policy only counts human benefits, long-term resilience—soil webs, microbe communities, migratory corridors—gets shortchanged. Then we’re surprised when the costs come due.
3) “Normal” Weather and the Shifting Baseline
Ask people to describe “normal” summer and they’ll point to their childhood. That’s shifting baseline syndrome—each generation inherits a degraded normal and calls it stable (Pauly, 1995). We calibrate climate to our memories, not to 100-year data. So droughts, floods, and heat waves look “weird” or “political” until they shorten seasons and burn fields. The human ruler collapses long climate systems into “What did it feel like when I was ten?” Good governance needs generational timeframes, not weekend comfort.
4) Design That Ignores Bodies That Aren’t Like Yours
Hand dryers that don’t detect dark skin. Voice assistants that miss accented speech. VR headsets built for interpupillary distances of adult men. “Works fine on my machine” becomes “works for people like me.” This isn’t malice; it’s a small sample problem with human consequences. In one famous case, commercial gender-classification systems misclassified darker-skinned women most often, partly because of unrepresentative training data (Buolamwini & Gebru, 2018). When your ruler is your own reflection, you ship exclusion as a feature.
5) Medicine’s Mouse Trap
We test drugs in mice because they’re easy to breed and control. Then we wonder why treatments don’t translate. Mouse biology is a stand-in for convenience, not a law of nature. It’s a human-centered choice (time, cost, ethics trade-offs) masquerading as neutral science. The fallout is real: failed trials, wasted money, missed cures. The better move is building a ladder of evidence that respects interspecies differences—organoids, computational models, multi-species replications—not just scaling up from off-the-shelf rodents.
6) Space, Aliens, and the Cosmic Mirror
We search the sky for radio signals because that’s what we know. We imagine life as carbon-and-water chemistry on temperate planets around Sun-like stars. That’s “carbon chauvinism.” Maybe right. Maybe not. The Copernican principle warns against assuming our vantage point is special, but anthropocentric habits persist: we picture a “technological civilization” as a version of us with faster rockets. The weirder lesson from the Fermi paradox might be that we’re bad at imagining minds that aren’t basically human with tentacles.
7) Time Horizons in Economics and Policy
We discount the future heavily because human lifespans are short and politics shorter. We weigh present gratification more than long-term health—a rational move for individuals, a destructive one for systems. When your accounting frame is an election cycle, boreal forests become lumber, not climate regulators; aquifers become inventory, not inheritance. Anthropocentric time is quick. Planetary time is patient. Decisions improve when we can see both.
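To see how fast that collapse happens, here’s a minimal sketch in plain Python. The discount rates and horizons are illustrative assumptions, not recommendations; the point is only the shape of the curve.

```python
# Illustrative only: how exponential discounting shrinks long-horizon value.
# present_value = future_value / (1 + rate) ** years

def discount_factor(rate: float, years: int) -> float:
    """Fraction of face value a future benefit is 'worth' today at a given annual discount rate."""
    return 1.0 / (1.0 + rate) ** years

for rate in (0.03, 0.07):            # assumed annual discount rates
    for years in (1, 10, 50, 100):   # human vs. planetary horizons
        print(f"rate={rate:.0%} horizon={years:>3}y -> {discount_factor(rate, years):.1%} of face value")
```

At 7% a year, a benefit a century out is worth roughly a tenth of a percent of its face value today. That is how a forest becomes lumber on paper.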
8) AI That Thinks We Are the Universe
We train models on our labels, our categories, our goals. Of course we do. The issue is believing those categories are “real” instead of “useful to us.” A model that predicts “beauty” from faces embeds cultural bias. A “toxicity” detector that flags minority dialects as hateful inherits human prejudice. Datasets reflect the hands that made them (Gebru et al., 2018). AI alignment debates can slide into anthropocentrism too: we assume human values are coherent, stable, and transferable. They aren’t. Values are negotiated within ecosystems of people, histories, and constraints.
9) Cities Built Like Mirrors
Sidewalks that end at highways. Buildings with no shade in places that hit 45°C. The design says: “Car-owning, able-bodied commuters at 21°C count most.” Everyone else—kids, elders, disabled people, non-drivers, birds—is an afterthought. A city measured by cars produces “congestion.” A city measured by people produces “proximity.” Change the ruler, change the diagnosis, change the build.
10) The WEIRD Trap in Psychology
Much of psychology is based on studies of Western, Educated, Industrialized, Rich, Democratic participants, aka WEIRD (Henrich, Heine, & Norenzayan, 2010). Those participants aren’t representative of humanity, but the field treated their responses as universal. What looks like a “human cognitive bias” might be a “Harvard undergrad cognitive bias.” Using the wrong ruler turns curiosity into overconfidence.
How to Recognize and Avoid Anthropocentric Thinking
We made this section for builders, researchers, policy folks—anyone who makes decisions that ripple outward. Tape it to your monitor.
The Practical Checklist
Use it straight or adapt it to your team. The goal is to widen your frame.
- ✅ Name your vantage point. Write down: whose needs, whose senses, whose goals are centered? Who is left out?
- ✅ Flip the unit. Ask: if we measured this in non-human units (seasons, soil health, acoustic ranges, microhabitats), what would change?
- ✅ Change the evaluator. Could a person unlike you, a non-expert, or an external reviewer grade this? Could a different metric—energy efficiency, biodiversity, error costs—be the success criterion?
- ✅ Extend the time window. What does this decision look like in 1 year, 10 years, 50 years? What inertia or irreversibility are you creating?
- ✅ Expand the sample. Include edge cases by design: dialects, devices, body types, climates, species interactions. Not as a “nice-to-have,” but as a requirement.
- ✅ Try alternative intelligences. Test for problem-solving in context, not just human-style tasks. If it swims, don’t give it a ladder.
- ✅ Stress-test with absence. Remove the human from the scene. What dynamics remain? Which still matter? Which break?
- ✅ Price the externalities. List the costs pushed to other species, communities, or future humans. Bring at least one into your budget.
- ✅ Use plural metrics. Combine human-centered outcomes (comfort, speed) with system metrics (resilience, redundancy, diversity).
- ✅ Keep the “unknown” column. Reserve space in your plan for unknown unknowns. Commit to iteration and monitoring.
If you only do three: name the vantage point, extend the time window, expand the sample.
Tactics for Teams
- Write a “non-user persona.” Example: a city hawk, a street tree, a night-shift worker. In a sprint review, ask how your decision affects them.
- Add a “species-aware” acceptance criterion. If you’re building outdoor lighting, specify lumen limits, color temperature, and timing that reduce harm to insects and birds.
- Run a WEIRD check. If your study or dataset is homogeneous, label it clearly. Don’t generalize beyond your sample.
- Measure failure for the least powerful. Evaluate harms where they compound: low-income neighborhoods, non-dominant dialects, non-drivers, non-human life in the area.
- Build exit ramps. If you’re wrong about your assumptions, how quickly can you reverse or mitigate?
Related and Confusable Concepts
Anthropocentrism overlaps with several ideas. They rhyme, but they aren’t the same song.
- Anthropocentrism vs. Anthropomorphism: Anthropocentrism measures everything by human standards. Anthropomorphism attributes human traits to non-human things—like saying your Roomba “wants” to go home. You can be anthropocentric without being anthropomorphic and vice versa (Epley, Waytz, & Cacioppo, 2007).
- Speciesism: Preferring human interests over non-human animals because of species membership (Singer, 1975). It’s a moral stance; anthropocentrism is a cognitive frame. One can feed the other.
- Egocentric Bias: Centering yourself. Anthropocentrism centers humans as a group. Both shrink the map, but one is “me-first,” the other is “us-first.”
- WEIRD Bias: Overgeneralizing from a narrow human subpopulation (Henrich et al., 2010). It’s a special case of anthropocentrism—“my kind of human = the human.”
- Present Bias / Short-Termism: Overvaluing near-term rewards. It fuels anthropocentrism’s short timelines but is not restricted to human-centeredness.
- Carbon Chauvinism: Assuming life must be carbon-based and water-dependent. It’s an astrobiology cousin of anthropocentrism—our own biochemistry as the default.
- Human Exceptionalism: The belief that humans are qualitatively different and superior. Anthropocentrism can exist without explicit superiority claims, but they often travel together.
A Builder’s Toolkit: Design With Context, Not Just Users
We build software and tools, so we think in patterns. Here are patterns that counter anthropocentrism when making things.
Pattern 1: Context-First Metrics
Stop optimizing a single, human-comfort metric. Blend:
- Comfort: response time, ease, clarity.
- Resilience: graceful degradation, redundancy, failover.
- Ambient impact: energy use, noise, light spill, wildlife effects.
- Inclusion: accuracy parity across demographics and dialects.
Example: For a “smart” outdoor sign, measure not just legibility but night-glow footprint, insect attraction, and energy usage per lumen-hour. You’ll choose warmer light, timed dimming, and shielded fixtures by default.
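Here’s a minimal sketch of what that blend can look like in code, using the sign example. Every metric name, range, and weight below is an assumption made up for illustration; the habit that matters is reporting component scores next to the blend so the trade-offs stay visible.

```python
# Illustrative sketch: score a design option on plural metrics, not comfort alone.
# All metric names, ranges, and weights are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SignOption:
    legibility: float             # 0..1, higher is better (human comfort)
    energy_wh_per_night: float    # watt-hours per night, lower is better (ambient impact)
    light_spill_lux: float        # stray light beyond the sign, lower is better (wildlife, night sky)

def normalize_lower_is_better(value: float, worst: float) -> float:
    """Map a 'lower is better' raw value into 0..1 (1 = best), clamped at 0."""
    return max(0.0, 1.0 - value / worst)

def plural_score(opt: SignOption) -> dict:
    """Return component scores alongside a weighted blend, so no single number hides the trade-offs."""
    scores = {
        "comfort": opt.legibility,
        "energy": normalize_lower_is_better(opt.energy_wh_per_night, worst=500.0),
        "darkness": normalize_lower_is_better(opt.light_spill_lux, worst=10.0),
    }
    weights = {"comfort": 0.4, "energy": 0.3, "darkness": 0.3}  # assumed weights
    scores["blended"] = sum(weights[k] * scores[k] for k in weights)
    return scores

bright = SignOption(legibility=0.95, energy_wh_per_night=400, light_spill_lux=8)
warm_dimmed = SignOption(legibility=0.85, energy_wh_per_night=150, light_spill_lux=2)
print("bright:      ", plural_score(bright))
print("warm/dimmed: ", plural_score(warm_dimmed))
```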
Pattern 2: Multi-Temporal Roadmaps
Plan in three lanes: now (1–3 months), horizon (1–3 years), legacy (10+ years).
- Now: usability and immediate value.
- Horizon: maintenance, environmental costs, migration paths.
- Legacy: sunset plan, recyclability, data stewardship.
When you explicitly discuss “legacy” every quarter, anthropocentric short-termism loses its grip.
Pattern 3: Dataset Stewardship
Treat datasets like living systems.
- Document provenance, consent, and coverage gaps (Gebru et al., 2018).
- Include edge dialects and low-resource languages.
- Measure performance parity. If you ship worse results to the margins, you’re baking in harm.
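A minimal sketch of that parity check, assuming your evaluation data can be sliced by group: compute the same metric per group and report the gap next to the overall number.

```python
# Illustrative sketch: per-group accuracy plus the parity gap, alongside the overall score.
# Group names and records below are made-up placeholders; slice on whatever your evaluation covers.
from collections import defaultdict

def parity_report(records):
    """records: iterable of (group, predicted, actual). Returns overall accuracy, per-group accuracy, and the gap."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        correct[group] += int(predicted == actual)
    per_group = {g: correct[g] / total[g] for g in total}
    overall = sum(correct.values()) / sum(total.values())
    gap = max(per_group.values()) - min(per_group.values())
    return {"overall": overall, "per_group": per_group, "parity_gap": gap}

# A respectable overall score can hide a large gap between groups.
records = [("group_a", 1, 1)] * 90 + [("group_a", 0, 1)] * 10 \
        + [("group_b", 1, 1)] * 60 + [("group_b", 0, 1)] * 40
print(parity_report(records))  # overall 0.75, but group_a 0.90 vs group_b 0.60
```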
Pattern 4: Non-Human Personas
Before shipping any product that interacts with the physical world, run a non-human persona pass.
- The Night: What does this do to darkness, circadian rhythms?
- The Water: How does it alter runoff, microplastics, temperature?
- The Bird: Does it fragment habitat or cause collision risk?
It’s not sentimentality. It’s systems design.
Pattern 5: Reversible Decisions
Anthropocentric mistakes get expensive when they’re irreversible. Reduce lock-in.
- Use modular hardware.
- Store configurations as code with rollbacks (a minimal sketch follows this list).
- Pilot in diverse environments before wide release.
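As a toy sketch of “configurations as code with rollbacks”: in practice the history lives in version control, but the shape is the same. Every change becomes a new version, and undo is one call instead of an incident. The class and field names here are made up for illustration.

```python
# Illustrative sketch: versioned configuration where rollback is a single call.
import copy
import json

class VersionedConfig:
    def __init__(self, initial: dict):
        self.history = [copy.deepcopy(initial)]   # every state we could roll back to

    @property
    def current(self) -> dict:
        return self.history[-1]

    def apply(self, changes: dict) -> None:
        """Record a change as a new version instead of mutating in place."""
        self.history.append({**copy.deepcopy(self.current), **changes})

    def rollback(self, steps: int = 1) -> dict:
        """Drop the most recent versions and return the restored config."""
        if steps >= len(self.history):
            raise ValueError("cannot roll back past the initial version")
        del self.history[-steps:]
        return self.current

cfg = VersionedConfig({"streetlight_cct_kelvin": 3000, "dim_after": "23:00"})
cfg.apply({"streetlight_cct_kelvin": 5000})        # the "brighter is better" change
print(json.dumps(cfg.rollback(), indent=2))        # one call to undo it
```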
Research Highlights (Light Touch, Heavy Signal)
- WEIRD critique: A massive share of psychology uses Western undergrads, making “human nature” claims suspect (Henrich, Heine, & Norenzayan, 2010).
- Shifting baselines: Each generation normalizes ecological degradation (Pauly, 1995).
- Ecosystem valuation: Translating nature into dollars changed policy debates but risks narrowing value to human utility (Costanza et al., 1997).
- Anthropomorphism drivers: We project human traits onto non-human agents when we seek social connection or need prediction (Epley, Waytz, & Cacioppo, 2007).
- Algorithmic bias in vision: Commercial gender classification tools failed most on darker-skinned women due to dataset bias (Buolamwini & Gebru, 2018).
These aren’t trivia. They’re reminders that our instruments—surveys, datasets, categories—are built by humans, for humans, and carry our blind spots.
Field Exercises: Stretch the Ruler
You learn this by doing, not reading.
1) Audit a Decision You’ve Already Made
Pick a product, policy, or study. For 30 minutes, complete the checklist. Write down one change you would make if you couldn’t prioritize human convenience. Share it with your team.
2) Build a “Non-User Impact” Section in Your PRD
In your next product spec, add two paragraphs: impact on non-users (neighbors, animals, datasets, power grid), and time-horizon risks. Keep it in every review.
3) Run a “WEIRD Drill”
If your user research is within a narrow population, schedule one sprint to diversify the sample. If that’s impossible, add a banner to your findings: “This applies to X demographic in Y context.”
4) Host a Shadow Systems Walk
Walk (physically) or map (virtually) the systems your product touches: light, noise, data flows, water, heat, labor. Draw arrows. Note where humans aren’t the only stakeholders.
5) Pre-Mortem the Non-Human Failure
Imagine your project “failed” because of a non-human factor: a heat wave, urban wildlife disruption, soil erosion, satellite glare. How could you design to prevent it?
What This Looks Like in Real Life
- A fintech team redesigns their credit scoring model to avoid punishing people without credit history. They use alternative data, but they explicitly exclude proxies for protected attributes, test with fairness constraints, and provide appeal pathways. They name the vantage point: “Access for financially invisible people.” The ruler grew.
- A city replaces bright blue-white streetlights with warmer, shielded fixtures on timers. Pedestrian safety stays high; insect mortality and night glare drop. They measured success not only in lumens, but in darkness preserved.
- A research lab studying “problem solving” in bees moves away from human-like mazes to foraging challenges bees face in the wild. They find route optimization that rivals our algorithms. The lab stops calling bees “simple.” Better science, fewer assumptions.
- A studio building an AR app includes field tests with older adults and people with motion sensitivity. They change default animations and add rest states. Adoption climbs outside the original demographic, and refund rates drop. This is not charity; it’s competence.
Wrap-Up: The Ocean Doesn’t Fit the Tape
Anthropocentric thinking is seductive. It keeps the world small and tidy, measurable by comforts we can count. But we live inside systems that don’t care about our rulers: oceans with their own physics, forests with old agreements, cities with forgotten pipes, datasets with ghostly biases, species with alien talents.
We’re MetalHatsCats. We make tools that help people see their thinking from the outside. Our upcoming app, Cognitive Biases, is one of them—built to surface patterns like this before they cost you clarity, money, or trust. If you build things, you carry responsibility and power. Expand your units. Widen your lens. Make room for quiet realities that don’t shout in human.
The work is not abstract. It’s a checklist on your wall, a person in your test group, a lumen on your street, a dataset note, a rollback plan, a timeline that outlives your day job. The tape measure can stretch. It’s your hand on it.
FAQ: Anthropocentric Thinking
Q1: Is anthropocentric thinking always bad? Not always. It’s efficient for everyday decisions and safety—designing door handles for human hands is fine. It becomes harmful when we mistake our convenience for universal truth, especially in policy, science, and infrastructure. Use it on small tools, not large systems.
Q2: How is this different from caring about human welfare first? Prioritizing human welfare is a moral stance. Anthropocentrism is a cognitive habit that narrows what you perceive and measure. You can still prioritize people while considering long-term system health, non-human impacts, and future generations. It’s about better sightlines, not less compassion.
Q3: What’s a quick test to catch it in my work? Ask: “Would my decision change if I had to live with its consequences as a different species or a future resident?” If the answer is yes, you’re likely leaning on a human-short-term ruler. Then check time horizon, sample diversity, and externalities.
Q4: Doesn’t science already control for bias? It tries, but methods reflect the questions we ask and the tools we have. WEIRD samples and convenience measures sneak in (Henrich et al., 2010). That’s why diverse samples, cross-species-aware designs, preregistration, and replication matter. Bias is a process problem, not a personal flaw.
Q5: If we avoid anthropocentrism, won’t decisions get slower and costlier? Some will, and they should—irreversible or widespread impacts deserve care. But many changes are cheap: better sampling, different default metrics, reversible designs. Slowing early often speeds later stages by preventing rework and backlash.
Q6: How do I balance business goals with non-human considerations? Make them part of the business goals. Track energy costs, regulatory risk, brand trust, and maintenance savings from resilient design. Many “non-human” metrics correlate with long-term profitability and stability. If they don’t, ask why your horizon is so short.
Q7: Isn’t ecosystem services framing necessary to win policy battles? It’s useful to make value legible (Costanza et al., 1997). Use it, but don’t confuse the map for the territory. Pair dollar values with rights-based and resilience-based arguments. Build policies that survive beyond budget cycles.
Q8: What about anthropomorphism—doesn’t that help empathy? Sometimes it does. Giving a river “personhood” can protect it legally; treating robots as social can ease adoption. But anthropomorphism can misread needs and cause harm. Use it like a metaphor, not a measure. Validate with non-human or context-true indicators (Epley, Waytz, & Cacioppo, 2007).
Q9: How can small teams apply this without a research department? Use the checklist. Run tiny pilots in diverse contexts. Borrow community testers. Document your vantage point and limits. You don’t need a lab; you need discipline and humility. Edge-case testing is often cheaper than production failures.
Q10: What role can AI play in reducing anthropocentrism? AI can widen perception—detect patterns humans miss, simulate long-term effects, translate across modalities. But it also amplifies our biases if we don’t curate data and objectives carefully (Buolamwini & Gebru, 2018; Gebru et al., 2018). The fix is not “AI instead of us,” it’s “AI with better rulers.”
Q11: How do I teach this to students or new hires? Use concrete drills: non-human personas, WEIRD checks, long-horizon scenarios. Assign a debiasing role in meetings: someone whose job is to name the vantage point and ask about externalities. Reward the behavior in reviews.
Q12: Can you give a one-sentence reminder I can paste in my docs? “Name the vantage, stretch the time, and check who pays.” Put it at the top of your PRDs and research plans.
The Last Note from MetalHatsCats
We build, we break, we learn. Anthropocentric thinking is one of those hidden defaults that keeps good ideas smaller than they should be. Our Cognitive Biases app exists because we want you to catch these defaults early, with tools that feel practical and human—ironically, yes, but in service of wider circles.
The ocean won’t fit your tape. Bring a different instrument. Build one if you must. That’s the work. That’s the fun.
