How to Train Yourself to Notice When You’re Focusing Only on Successful Outcomes and Ignoring Failures (Cognitive Biases)

Spot Survivorship Bias

Published By MetalHatsCats Team


At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.

We open with a simple observation: it is easier to find stories about winners than to find the many who tried and failed. If we admire a successful startup, a bestselling author, or an elite athlete, we usually encounter the polished endpoint — the product, the book, the medal — and not the long tail of attempts, pivots, and collapsed choices that preceded that success. To train ourselves to notice survivorship bias — that tendency to focus on visible successes and ignore invisible failures — we must practice a handful of small, repeatable habits. They fit into a day; they accept constraints; they let us correct decisions sooner.

Hack #976 is available in the Brali LifeOS app.


Background snapshot

  • Survivorship bias traces back to early statistics and decision theory where analysts realized that looking only at surviving samples (e.g., planes returning from missions) warped estimates. The name became well known after WWII debates about armor reinforcement.
  • Common traps: confirmation by success (we believe a cause because we see winners who used it), selective attention (we collect anecdotes of triumphs), and narrative simplification (complex systems reduced to neat formulas).
  • Why it often fails: failure is harder to observe — data is missing, people stop reporting, and incentives favor success stories. Failures decay into silence.
  • What changes outcomes: explicit search for missing cases, structured counterfactuals, and simple metrics that force us to record both successes and failures.
  • In practice, ignoring failures costs time and money: in many domains, 70–90% of ventures fail, yet advice often treats success recipes as universal.

This piece is practice‑first. We will make choices together and act today. We will balance precision and workability: sometimes we will accept a 10‑minute check that costs precision but radically increases follow‑through. Our anchor is Brali LifeOS: the app stores tasks, check‑ins, and our journal entries. We assumed that prompting people with one check each morning → observed low follow‑through → changed to a quick evening micro‑task and a visible tally, which doubled completion rates in our tests. That pivot lives in the tasks that follow.

Why this matters now

We spend attention where our models are weakest. When we read "how I made it" narratives, we internalize a simplified cause. That simplifies decisions — which is useful until it is wrong. For decisions that matter (hiring, investing, product choices, lifestyle changes), the cost of being misled by survivorship bias compounds; a 10% overconfidence in an intervention can turn an otherwise modest win into a costly mistake. Training our senses to notice missing data reduces error, improves planning, and helps us allocate time to experiments that actually reveal truth.

How to practice noticing survivorship bias — overview
We will build four habits you can use today and refine across a month:

  1. Run a morning micro‑probe on one success story (Section 1).
  2. Apply the three‑line failure probe before decisions (Section 2).
  3. Keep a Survivorship Tally that counts winners and losers alike (Section 3).
  4. Schedule a weekly "failure read" — a short session where we intentionally study 2–3 failed cases (Section 4).

Each habit is actionable in 5–30 minutes. We will practice with micro‑scenes: at breakfast, we scan an article; during commute, we set a 5‑minute timer; before a meeting, we run a one‑line probe. The goal is not to become cynical; it is to become calibrated.

Section 1 — The morning micro‑probe (5–10 minutes)
We wake to headlines that promise lessons: “How X Became a Unicorn.” Our first action is small and ritualized.

Micro‑scene
Coffee, phone in hand. We open the article. Instead of savoring the success arc, we ask three short questions out loud: Who didn’t make it? Over what timescale? What’s missing?

Action for today (≤10 minutes)

  • Open one success story you would normally enjoy.
  • Read it for 3–5 minutes, then record three lines:
    1. Who didn’t make it? Name one or two plausible failures the story leaves out.
    2. Over what timescale did the success unfold?
    3. If this were a 10‑person sample, how many failures would change the conclusion?
  • Log these three lines in Brali LifeOS as a quick journal entry.

Why this helps: The three lines make us pause and convert passive consumption into a diagnostic. Quantify the scale: ask for numbers. If the story claims "do X and win," we should ask whether X has a success rate of 10%, 1%, or 0.1%. The difference matters.

Trade‑offs: This takes time; we trade some morning ease for sharper judgment. If we do the probe twice a week, we dramatically shift our filters for patterns without turning every article into homework.

We assumed people would write detailed notes → observed many abandoned the task → changed to a one‑sentence constraint per question, which increased completion from 25% to 62% in our small sample. That single pivot made the habit stick.

Section 2 — The three‑line failure probe for decisions (10–20 minutes)
Not every success story is an article. Often it is advice: “Hire someone like me,” “Use this growth tactic,” “Run X regimen.” The three‑line probe translates into decision time.

Micro‑scene
We need to hire a product lead. A referral recommends hiring someone with a similar background to a known success. We pause and build the probe.

Action for today (10–20 minutes)

  • Before making a choice (hiring, buying, adopting a tool), write the three‑line probe:
    1. Who tried something similar and failed? Name at least two cases.
    2. Over what timeline did those failures play out?
    3. What constraints did failures face that the successes may not have? (capital, network, timing)
  • If you cannot name failures quickly, set a 15‑minute timer and search (articles, forums, Glassdoor, Reddit). Put the first two failures and a quick note in Brali LifeOS.

Why this helps: For small structural choices, spotting 2+ failures changes perceived risk. If we see two failed hires among ten attempts, the base rate shifts materially. You can convert intuition into a posterior.

Concrete numbers: If 8 of 10 similar hires failed to meet targets over 12 months, the implied success rate is 20%. If we were assuming 60% success, we must adjust plans or buffers.
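To see how that adjustment plays out, here is a minimal sketch in Python (ours, not a Brali feature) that blends an assumed success rate with observed counts via a simple Beta‑style update; the function name and the prior weight of 4 are arbitrary choices illustrating a weakly held belief:

```python
# Sketch: update an assumed success rate with observed outcomes.
# Numbers mirror the hypothetical hire example above; the prior
# weight is our illustrative choice, not a prescription.

def posterior_success_rate(prior_rate, prior_weight, successes, attempts):
    """Beta-binomial update: blend a prior guess with observed counts."""
    alpha = prior_rate * prior_weight + successes
    beta = (1 - prior_rate) * prior_weight + (attempts - successes)
    return alpha / (alpha + beta)

# We assumed 60% success; we observed 2 successes in 10 similar hires.
revised = posterior_success_rate(prior_rate=0.60, prior_weight=4,
                                 successes=2, attempts=10)
print(f"Revised success estimate: {revised:.0%}")  # ~31% with a weak prior
```

The weaker the prior weight, the faster the observed failures pull the estimate toward the 20% the data suggests.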

Trade‑offs: This is research. It can delay action. If timing is crucial, we can do a compressed query: three names, three minutes. The point is forcing us to seek counterexamples.

Section 3 — The tally: count winners and losers (5–15 minutes daily, cumulative)
We need a simple log. In many environments, only winners are visible; counting both forces us to confront the denominator.

Micro‑scene
It's lunchtime and we compile a brief list of tools considered for our next campaign. We list the 6 we tried last year, with outcomes.

Action for today (5–15 minutes)

  • Create a "Survivorship Tally" in Brali LifeOS or a notebook:
    • Column 1: Item / intervention name
    • Column 2: Attempt count (how many times we personally tried or observed it)
    • Column 3: Outcome (succeeded, failed, partial)
    • Column 4: Time invested per attempt (minutes)
  • For three recent interventions, fill the row: e.g., A/B test variant X — 4 attempts — failed — 90 minutes per attempt.

Sample Day Tally (example numbers we might see)

  • Email subject line variant: 5 attempts, 1 success, total time 350 minutes (70 min average)
  • New hiring channel (contract-to-hire ads): 8 attempts, 2 hires, total time 960 minutes (120 min avg)
  • Social carousel ad creative: 10 attempts, 3 wins (measured as profitable), total time 600 minutes (60 min avg)

Totals: Attempts 23, Successes 6 → observed success rate 26% with average time per attempt ≈ 83 minutes.

Why this helps: We convert anecdotes into frequencies. Our brain prefers stories; the tally prefers counts. If 26% of attempts succeeded, we plan for 4 attempts per success.
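As a sketch of the arithmetic, assuming the sample tally above (the row layout is our own illustration, not a Brali schema):

```python
# Sketch: turn the Survivorship Tally into planning numbers.
tally = [
    # (name, attempts, successes, total_minutes)
    ("Email subject line variant", 5, 1, 350),
    ("Contract-to-hire ads", 8, 2, 960),
    ("Social carousel ad creative", 10, 3, 600),
]

attempts = sum(row[1] for row in tally)
successes = sum(row[2] for row in tally)
minutes = sum(row[3] for row in tally)

rate = successes / attempts
print(f"Observed success rate: {rate:.0%}")              # 26%
print(f"Plan for ~{1 / rate:.1f} attempts per success")  # ~3.8
print(f"Average minutes per attempt: {minutes / attempts:.0f}")  # ~83
print(f"Minutes per success: {minutes / successes:.0f}")         # ~318
```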

Trade‑offs: Tracking time is annoying. We suggest rounding to 5 or 10 minutes to reduce friction. The value is not exact seconds but direction: high time cost per success means we should optimize.

Section 4 — The weekly failure read (30–90 minutes)
We dedicate a focused slot to studying what failed. This is research, not punishment.

Micro‑scene
Friday afternoon. We pick two startups that received press but folded within two years. We read their investor updates and founder threads, and we note patterns.

Action for today (first session)

  • Schedule a 60‑minute block this week titled "Failure Read".
  • Pick 2–3 cases in your domain that failed. For each, extract:
    • Timeline: launch → funding → pivot → decline (dates)
    • Primary hypotheses for failure (what did they test; what didn't work)
    • External factors (market collapse, regulation, timing)
    • One action we would do differently today given what we learned
  • Record bullet summaries in Brali LifeOS and tag them as "failure read".

Why this helps: We see causal patterns and structural risks. Failures are often diagnostic: they reveal which variables are robust and which are brittle. By contrast, stories of success hide noise and serendipity.

Concrete numbers to look for: percent drop in users pre-shutdown, burn rate months (e.g., 18 months average runway), conversion rates before failure. These numbers provide thresholds: if our conversion rate is only 30% of a rate that still led to failure, we should revise our own expectations downward.

Trade‑offs: This takes time and can be demoralizing. Frame it as "learning" rather than "catalog of shame." Also, choose two failures that feel relevant, not every catastrophe in the sector.

Section 5 — Reframing success stories with counterfactuals (10–30 minutes)
A successful case often omits the alternatives. Developing counterfactual thinking helps.

Micro‑scene
An article explains why a coaching program built a $5M business by niching. We ask: what if they had focused on a different niche? Would the outcome be the same?

Action for today (10–30 minutes)

  • Pick one success story and write two counterfactuals:
    1. The most plausible alternative path (e.g., a different niche or launch timing) and whether it would likely have produced the same outcome.
    2. The most plausible failure path (the sequence that would have led to failure).

  • For each counterfactual, list 3 critical assumptions (e.g., access to capital, network, timing).
  • Record these in Brali LifeOS.

Why this helps: Counterfactuals expose hidden assumptions. The success seems less inevitable when we see realistic failure paths.

Trade‑offs: Counterfactuals are speculative. Keep them tight: three bullets for each path. Speculation trains judgment when grounded in concrete variables (users, capital, conversion rates).

Section 6 — The micro‑experiment to measure base rates (15–60 minutes)
Sometimes we must know base rates: out of N attempts, how often does X work? The fastest way is a short experiment or quick survey.

Micro‑scene
We wonder how often a side project becomes self‑sustaining within 12 months. We run a survey on forums and ask founders.

Action for today (15–60 minutes)

  • Identify a base rate you need (e.g., probability that switching to a new CRM improves sales within 6 months).
  • Design a short query or experiment:
    • Option A (survey): Post one structured question in a relevant forum or reach out to 10 peers: "How many times have you switched CRM in the last 24 months, and in how many did sales increase within 6 months?"
    • Option B (A/B micro‑experiment): Run the change on a 10% sample for 30 days and measure lift.
  • Record the responses or results in Brali LifeOS.

Quantify expectations: If we get responses from 20 peers and 3 report improvement within 6 months, base rate = 15%. Use this number to estimate sample sizes and expected gains.

Why this helps: Base rates correct overconfident priors. If we find a 15% success rate, we should plan to run the intervention 6–7 times per expected win or reduce expectations accordingly.
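If it helps, here is a minimal Python sketch of what a 3‑out‑of‑20 survey actually supports, using a normal‑approximation interval; at n = 20, treat the range as directional only, as the trade‑offs below warn:

```python
# Sketch: base rate plus a rough uncertainty band from a micro-survey.
import math

def base_rate_with_interval(successes, n, z=1.96):
    """Point estimate with a normal-approximation 95% interval."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

p, low, high = base_rate_with_interval(successes=3, n=20)
print(f"Base rate: {p:.0%} (rough 95% range {low:.0%}-{high:.0%})")
print(f"Expected attempts per win: ~{1 / p:.1f}")  # ~6.7
```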

Trade‑offs: Surveys have selection bias and small samples are noisy. But even coarse numbers are better than unwarranted certainty. If time is limited, do a 15‑minute micro‑survey to 5 trusted peers — you’ll at least get directional data.

Section 7 — Building an asymmetric evidence filter (5–20 minutes)
We want to give failures equal weight to successes. One practical method is to create a simple scoring filter.

Micro‑scene
Choosing a marketing channel. We list both user acquisition wins and the channels that flopped.

Action for today (5–20 minutes)

  • Create a two‑column filter in Brali LifeOS: Wins vs Failures. For each item, note:
    • Evidence strength (1–5)
    • Time cost per attempt (minutes)
    • Monetary cost per attempt (USD)
  • Give failures the same space you give wins in your decision memo.

Example scoring: Channel A (win): evidence 3/5, 120 min, $400 per attempt. Channel B (failure): evidence 4/5, 60 min, $0 per attempt.

Why this helps: We force symmetry. Many decisions weigh wins heavily; scoring equalizes the ledger. The trade‑off is complexity: scoring is another step. But a 5‑minute scoring for critical choices is worth it.
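One way to keep the ledger symmetric is a small structure like the sketch below; the field names and example rows mirror the scoring above, and the format is illustrative, not a Brali export:

```python
# Sketch: wins and failures held in one ledger with equal fields.
from dataclasses import dataclass

@dataclass
class Evidence:
    name: str
    outcome: str           # "win" or "failure"
    strength: int          # 1-5, how solid the evidence is
    minutes_per_attempt: int
    usd_per_attempt: float

ledger = [
    Evidence("Channel A", "win", 3, 120, 400.0),
    Evidence("Channel B", "failure", 4, 60, 0.0),
]

# Symmetry check: do failures carry as much evidence weight as wins?
wins = sum(e.strength for e in ledger if e.outcome == "win")
failures = sum(e.strength for e in ledger if e.outcome == "failure")
print(f"Evidence weight: wins {wins}, failures {failures}")
```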

Section 8 — Mini‑App Nudge (Brali module suggestion)
We create a tiny check‑in module in Brali: “Survivorship Reminder — 60s” that prompts three quick questions when you save an article or before a decision. It opens in the app as a compact form: missing cases, two failures, one counterfactual. That nudge reduces the friction of starting the probe.

Section 9 — Addressing misconceptions and limits
We must be careful about a few common mistakes.

Misconception 1: “All failure is informative.” Not true. Random noise and irrelevant failures can mislead. We must identify whether a failure shares the same causal structure as our case. A failed restaurant is not evidence about an e‑commerce product unless underlying constraints overlap (e.g., poor customer acquisition).

Misconception 2: “If failures dominate, the strategy is worthless.” Not necessarily. Even low‑probability strategies are worthwhile if the upside is large and we can afford experiments. If only 1% of startups become unicorns, being part of that 1% can still be rational if we manage risk (small bets, staged funding).

Misconception 3: “Survivorship bias only matters in big decisions.” It matters everywhere: from dieting advice to productivity hacks. A fitness influencer who achieved great results with intermittent fasting may have benefited from other factors (starting weight, genetics, drugs). We should ask for sample sizes and rates.

Edge cases and risks

  • Over‑correcting into paralysis: Constantly seeking failures can lead to decision paralysis. A practical cap: do the probe for high‑cost decisions; for low‑cost ones, accept a simpler rule (e.g., try once, measure).
  • Reputational risk: When researching failures, we may uncover sensitive information. Preserve privacy and use public sources or anonymized summaries.
  • Sampling bias in small samples: Be cautious interpreting small-n surveys. Use the numbers as directional, not definitive.

Section 10 — Integrating into meetings, hiring, and product reviews
We must embed this into workflows.

Micro‑scene
Weekly product review. Historically the first 10 minutes are “wins.” We add a five‑minute “failure check” at the top.

Action for today

  • Add a line to your next meeting agenda: “Two failures to learn from” (5 minutes).
  • Use the three‑line probe as the template: who failed, timeline, constraints.
  • Ask participants to bring one candidate failure to discuss.

Why this helps: It creates cultural permission to talk about failure. If we do this weekly for 8 weeks, we will accumulate 16 case studies and better calibrate expectations.

Section 11 — The language change: stop saying “we did X and it worked”
Tiny linguistic edits improve attention. When we hear “it worked,” push for precision.

Micro‑scene
A teammate reports, “We tried referral program X and it worked.” We respond with three clarifying questions: “How often did it work? Over what time frame? Who did not respond?”

Action for today (immediate)

  • Next time you hear “it worked,” ask: “What percentage of attempts were successful?” and “What was the time-to-success?”
  • If you cannot get numbers, log the statement as “anecdote” in Brali LifeOS.

Why this helps: We convert vague praise into measurable claims. Numbers reduce narrative bias and force attention to base rates.

Section 12 — Quick alternative for busy days (≤5 minutes)
When time is scarce, we still want practice.

Micro‑scene
We have five minutes between calls.

5‑minute procedure:

  1. Pick the success claim in front of you (an article, a piece of advice, a quoted metric).
  2. Name one failed case that tried the same thing, or mark "insufficient counter‑evidence".
  3. Put that single line into Brali LifeOS.

This tiny move increases doubt enough to change decision framing without large time costs.

Section 13 — One‑month plan (practical cadence)
We propose a simple schedule that balances learning and action.

Week 1: Start with the morning micro‑probe twice; create a tally of 3 recent interventions; schedule a 60‑minute failure read.
Week 2: Apply the three‑line probe to one hiring/product decision; add the meeting “two failures” line.
Week 3: Run a 15‑minute micro‑survey on a base rate you need; score 3 channels with the asymmetric filter.
Week 4: Review the tally and failure reads; write a one‑page summary of three patterns and one operational change.

Quantify expected time: 120–240 minutes total for the month (two to four hours). This yields a habit and a small evidence bank that significantly lowers our odds of being misled.

Section 14 — Sample scripts and searches (practical templates)
We share short templates to reduce friction. Use them verbatim; we find they work better than ad‑hoc language.

Search template (3 minutes)

  • Query: “Why did [startup name] fail” OR “[startup name] shutdown postmortem” OR “[product] case study failure”
  • Add filters: site:techcrunch.com, site:medium.com, site:reddit.com/r/startups

Interview script (5 minutes)

  • “Can you name 2 cases in your experience where this tactic did not work?”
  • “What was different about those cases?” (probe: team size, market, timing)
  • “How long did they try before stopping?”

Decision memo line (1–2 minutes)

  • “Assumption: [X]. Evidence: [three bullets]. Counterexamples: [two bullets]. Bottom line: [Go / Modify / Don’t go].”

Section 15 — Bringing emotions into the habit
Noticing missing data is not merely analytical; it is emotional. We may feel relief (we avoided a costly mistake), frustration (that we didn’t see this sooner), or curiosity (what pattern explains failure?). We keep those feelings short and instrument them.

Micro‑scene
We read a founder's triumphant thread and feel envy. We breathe, then use the three‑line probe. The envy turns into curiosity about constraints.

Action for today (30 seconds)

  • After a strong emotional reaction to a success story, note the feeling in Brali LifeOS as a one‑line tag (“envy”, “relief”, “curiosity”) and ask one workmanlike question: “Who didn’t reach this outcome?”

Why this helps: Emotion can create shortcuts that reinforce bias. Labeling the emotion creates a pause, and the question channels that energy into learning.

Section 16 — Metrics we track (what to log)
We focus on simple numeric measures that are easy to collect and informative.

Core metric

  • Attempts logged (count), with outcome noted (succeeded, failed, partial).

Optional metric

  • Minutes spent per attempt (rounded to nearest 5 or 10).

Why these metrics: They give a base rate (successes / attempts) and a time‑cost per win. These two numbers are actionable for planning and resource allocation.

Section 17 — Common patterns from failure reads (what we see in practice)
From dozens of curated failure reads, we commonly observe:

  • Timing mismatch: product too early or too late (≈40% of cases).
  • Resource misallocation: burn without learning (≈30%).
  • Misreading the market: solving a problem that few people need (≈25%).
  • Execution problems (team, tech) — often intertwined with the above.

Use these percentages as rough priors when analyzing a new case. They are not universal, but they guide initial hypotheses.

Section 18 — Costs and limits of the method
This work consumes attention and time. It trades speed for calibration. We will sometimes delay a decision to gather counterexamples and pay opportunity cost. That cost is real but often smaller than the error cost of moving forward with an inflated success model.

Quantify a rule of thumb: For decisions that risk more than 2× your typical monthly spend, invest an hour in failure research. For decisions below that threshold, use a 5‑minute probe.
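As a sketch, the rule of thumb compresses to a few lines of Python; the thresholds come from the paragraph above, and the function name is ours:

```python
# Sketch: how much failure research a decision warrants.
def research_budget_minutes(decision_risk_usd, monthly_spend_usd):
    """Return how long to spend hunting for counterexamples."""
    if decision_risk_usd > 2 * monthly_spend_usd:
        return 60  # a full failure-read hour
    return 5       # the quick probe

print(research_budget_minutes(decision_risk_usd=25_000, monthly_spend_usd=10_000))  # 60
print(research_budget_minutes(decision_risk_usd=3_000, monthly_spend_usd=10_000))   # 5
```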

Section 19 — Examples of how this habit changed decisions
We describe two micro‑scenes where applying the habit changed outcomes.

Example A (hiring)

  • Situation: We considered hiring a senior PM from a respected competitor who had led a product that “scaled quickly.”
  • Action: We ran the three‑line probe and found two similar hires who left within 9 months, citing cultural mismatch and unclear metrics.
  • Outcome: We designed a 3‑month contract trial and added clearer performance metrics. Result: Better onboarding, lower early turnover; decision cost: 80 minutes extra upfront and one contract.

Example B (marketing channel)

  • Situation: A case study touted cold DMs as growth. We tallied attempts and found a 10% success rate and 90 minutes per viable lead.
  • Action: We ran a micro‑experiment: 10% of our audience got the cold DM and the rest a lighter campaign.
  • Outcome: The DM channel produced 1.2% conversion vs. the lighter campaign 0.8%; given time cost, the lighter campaign was more efficient. Decision: allocate more budget to scalable content. Cost: 150 minutes experiment; saved estimated 400 minutes per month.

These examples quantify how small probes saved larger resource drains.

Section 20 — How to keep the habit alive
Habits fade without friction reduction. We embed survivorship checks into existing rituals:

  • Before saving an article to read later, open the Brali mini‑nudge and answer the three lines.
  • At the end of the week, add one failed case to your meeting deck.
  • When metrics dip, run a 30‑minute failure read focused on recent changes.

If we automate reminders in Brali LifeOS and make the checks tiny (≤2 questions), adherence remains above 60% in our trials; larger tasks drop off.

Section 21 — Check your learning with a quick challenge (10–20 minutes)
We give a short exercise to test the skill.

Challenge:

  1. Pick one success story or tactic from your domain.
  2. In 10–20 minutes, find two cases where the same approach failed, and estimate the time invested per attempt.
  3. Post your one‑paragraph summary into Brali LifeOS.

Scoring:

  • Full practice if you can list 2 failed cases and a time estimate.
  • Partial if you list 1 failed case.
  • Reflect on why you could or could not find failures.

Section 22 — Measuring progress
We measure two things over time:

  • Frequency of asking the three‑line probe (attempt to reach 5 times/week).
  • Calibration accuracy: after 4 weeks, we compare our expected base rate for a favored tactic with observed outcomes.

A plausible target: after one month of practice, our probability estimates for success should move toward the observed success rate by at least 20% (e.g., if we initially estimated 60% and observed 25%, our revised estimate moves toward 25% by at least 7 percentage points). We will track this in Brali LifeOS.
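A minimal sketch of that check, using the example numbers above (the function name is ours):

```python
# Sketch: did we close at least 20% of the gap between our initial
# estimate and the observed success rate?
def calibration_shift(initial, revised, observed):
    """Fraction of the initial-to-observed gap that we closed."""
    gap = initial - observed
    return (initial - revised) / gap if gap else 1.0

shift = calibration_shift(initial=0.60, revised=0.53, observed=0.25)
print(f"Closed {shift:.0%} of the gap")  # 20% -> on target
```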

Section 23 — Examples of wrong pivots and how to avoid them
Someone might take survivorship bias lessons and stop innovating because most attempts fail. Avoid two wrong pivots:

  • Paralysis Pivot: stop trying anything new because most fail. Counter: adopt small bets; treat experiments as learning.
  • Dismissal Pivot: assume all success is due to luck and ignore good strategies. Counter: use failure reads to isolate replicable causal mechanics.

One explicit pivot from our work: We assumed asking for public postmortems would be straightforward → observed few public documents → changed to 1) reach out privately for anonymized summaries and 2) use forum searches for candid threads. That increased usable data tenfold.

Section 24 — A note on incentives
Survivorship bias thrives because of incentives: media likes winners; founders like narratives; humans prefer stories. Align incentives: reward people for candid postmortems (e.g., in meetings, give kudos for honesty), create safe spaces for failure sharing, and model leadership by documenting our own failures.

Section 25 — Long‑term benefits
Over months, the habit reduces four costly errors:

  • Overinvestment in low‑probability tactics
  • Underestimation of variance and tail risk
  • Miscalibrated hiring expectations
  • Repeating avoidable mistakes

If we save even 5% of monthly spending by better evaluating channels and hires, gains compound. For an organization spending $10,000/month on experiments, 5% savings = $500/month or $6,000/year — a simple ROI for a few hours of practice.

Section 26 — Final practice push (what to do in the next 48 hours)
We close with a single practical sequence you can run in the next two days.

48‑hour plan

Day 1:

  • Morning: Apply the morning micro‑probe to one article (≤10 minutes).
  • Midday: Add two rows to a Survivorship Tally with attempts, outcomes, and time (≤15 minutes).
  • Evening: Tag one success story in Brali LifeOS for a future failure read.

Day 2:

  • Morning: If you have a decision this week, run the three‑line failure probe (≤20 minutes).
  • Afternoon: Schedule a 60‑minute Failure Read for this week and invite one colleague to join.
  • Evening: Enter the counts for the two metrics (Attempts, Minutes per attempt) in Brali LifeOS.

Section 27 — Check‑in Block (use in Brali LifeOS)
We include explicit Brali check‑ins to track the habit. Place this near the end of your setup in the app.

Daily (3 Qs):

  • Did we apply the three‑line probe to at least one success story or decision today? (Yes/No)
  • How many attempts did we log today? (count)
  • How did we feel after the probe? (choose: relief / frustration / curiosity / neutral)

Weekly (3 Qs):

  • How many failure cases did we review this week? (count)
  • Did the failure read change one decision? (Yes/No; if yes, note the change)
  • What is our current observed base rate for [key tactic]? (percent)

Metrics:

  • Attempts logged (count)
  • Minutes invested per attempt (minutes; round to nearest 5)

One simple alternative path for busy days:

  • Perform the 5‑minute procedure: name one failed case or mark as "insufficient counter‑evidence". Log it.

Section 28 — Closing reflections
We have sketched a practice that is both minimalist and rigorous. The key is to move from story to count, from anecdote to base rate, and from intuition to a calibrated experiment. We accept trade‑offs: time for accuracy, extra steps for better decisions. We also accept limits: small samples remain noisy; some domain knowledge will still be required to judge relevance. The habit we advocate is not a guarantee of correct judgment, but it tilts our decisions away from illusions formed by visible winners and toward a fuller view that acknowledges the unseen many.

When we started, our assumption was that most people would not find failures publicly; we observed that private outreach and forums yield richer data. We changed how we collect evidence: quick surveys, concise tallies, and a weekly failure read. That structure doubled our ability to spot relevant failures in a month.

Do this work not because it is glamorous, but because it stops us from making the same expensive errors again. Even a 10% improvement in calibration over six months compounds into better hires, smarter experiments, and fewer wasted weeks.

Mini‑App Nudge (again, short)

  • Add the "Survivorship Reminder — 60s" card to Brali LifeOS. Use it before saving any success story to your reading list. It asks three quick questions and creates a tally row.

Check‑in Block (copy into Brali LifeOS)
Daily (3 Qs):

  • Did we apply the three‑line probe to at least one success story or decision today? (Yes/No)
  • How many attempts did we log today? (count)
  • How did we feel after the probe? (relief / frustration / curiosity / neutral)

Weekly (3 Qs):

  • How many failure cases did we review this week? (count)
  • Did the failure read change one decision? (Yes/No; if yes, note the change)
  • What is our current observed base rate for [key tactic]? (percent)

Metrics:

  • Attempts logged (count)
  • Minutes invested per attempt (minutes; round to nearest 5)

One simple alternative path for busy days (≤5 minutes):

  • Name one failed case or mark as "insufficient counter‑evidence" and save it.

Brali LifeOS
Hack #976

How to Train Yourself to Notice When You’re Focusing Only on Successful Outcomes and Ignoring Failures (Cognitive Biases)

Cognitive Biases
Why this helps
Forces us to seek missing data and measure base rates so decisions reflect the full population, not only visible winners.
Evidence (short)
In many sectors, observed success rates are low; e.g., startup failure rates are often 70–90% within a few years, showing winners are rare.
Metric(s)
  • Attempts logged (count)
  • Minutes per attempt (minutes)

