How to Stay Objective When Analyzing Outcomes (Cognitive Biases)

Avoid Expectation Bias

Published By MetalHatsCats Team

At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.

We begin with a practical promise: if we apply three small habits today — double‑checking results, inviting a neutral second opinion, and documenting our assumptions before an experiment — we will noticeably reduce our likelihood of misreading outcomes. This is not magic; it is about shifting a few routine behaviours that account for a lot of mistakes. In the paragraphs that follow we move from theory into practice, with live micro‑scenes, short decisions we can take right now, and exact check‑ins to track progress.

Hack #980 is available in the Brali LifeOS app.

Brali LifeOS — plan, act, and grow every day

Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.

Get it on Google Play · Download on the App Store

Explore the Brali LifeOS app →

Background snapshot

The idea of expectation bias and outcome misreading has roots in 19th and 20th century experimental science and later in cognitive psychology. Researchers noticed that observers often saw what they expected, not what the data showed; early laboratory double‑blind methods were a direct response. Common traps today include selective memory (we remember hits, forget misses), motivated reasoning (we explain away inconvenient data), and post‑hoc rationalizations (we fit a story to outcomes after the fact). These patterns fail because they convert noise into false signals, and they persist because the short‑term costs of being wrong are often less visible than the comfort of certainty. What changes outcomes is intentional friction: documented expectations, forced delays before interpretation, and a simple outside check.

Why this hack matters

When we misread outcomes, we reinforce bad practices and waste time. For example, if we believe a new workflow saves us 30 minutes but only measure tasks selectively, we might keep a bad process for months. Properly applied, the three habits below can cut error rates in judgement tasks by as much as an order of magnitude in many small experiments: we’re not promising perfection, we’re promising fewer false positives and clearer decisions.

A small scene to start: the morning we test a new coffee routine

We stand at our kitchen counter, a timer set for 12 minutes while a new pour‑over method brews. We expect better clarity, a slightly lower jitter in the afternoon, and a happier mood overall. We have two choices: sip now and retro‑rationalize why the day felt different, or record a few baseline numbers — minutes to finish a task, subjective clarity on a 1–10 scale, and caffeine mg if we measure it — before we draw any causal claim. We choose the latter. It takes 5 minutes. That small pause changes everything.

Practice‑first: three immediate actions we can do today

Step 1: Write a one‑line expectation note before starting: "Prediction — [X]. Metric — [Y]. Assumption/confound — [Z]." (≤10 minutes)

Step 2: Log the raw metric values as they happen (date, value, one line of context) instead of reconstructing them from memory.

Step 3: Schedule a short check‑in 48–72 hours later and invite one neutral person (colleague, friend, or the Brali check‑in module) to review the raw numbers.

We assumed "noting expectations is annoying and will slow us down" → observed "we were faster to interpret results and made fewer excuses" → changed to "we document expectations for all pilot tests under 10 days." That explicit pivot — from resistance to a documented standard — is the single behavioural tweak that produces repeatable gains.

Why small frictions help

We often treat documentation as optional because it has a small upfront cost and deferred benefits. Yet a 5‑minute note reduces downstream misinterpretation by introducing a time delay and a written anchor. Writing transforms a fuzzy hope into a testable claim. The friction is not a tax; it is calibration. If we do it for 80% of small experiments, our decisions improve meaningfully.

Section 1 — Document assumptions before we begin (and how)
We begin with an obvious but underused practice: write our expectations. The skill is not the writing itself but making it brief, crisp, and specific.

Micro‑scene: the work sprint

We plan a two‑hour work sprint aiming to complete three tasks. Our expectation: "I will finish Tasks A, B, and C and report a focus rating of 7/10 at the end." We write that in our journal (3–4 lines). We also write one reason we might be wrong: "I may get interrupted by a meeting or emails." This counterfactual is crucial because it reminds us of plausible noise.

How to write useful expectations (practical rules)

  • Keep it to one sentence of prediction and one sentence of assumptions. For example: "Prediction: We will reduce meeting time by 20% this week. Assumptions: Attendees will read the agenda before the meeting and limit comments to agenda items."
  • Tie the prediction to a numeric or clearly observable outcome: minutes, counts, or a rating 1–10. Numbers force clarity.
  • Note one specific confound you cannot control (e.g., a client canceled — account for it later).

Write it now (exercise)

Set a 5‑minute timer. Write one line: "Prediction — [X]. Metric — [Y]. Assumption/confound — [Z]." That’s the first micro‑task in the Hack Card and it takes ≤10 minutes.

Trade‑offs and constraints

  • Trade‑off: Time vs. precision. We might spend 5–10 minutes writing assumptions and lose a few minutes, but that prevents hours wasted on false conclusions later.
  • Constraint: Not every context deserves heavy documentation. For trivial choices (what to eat for lunch), brief notes are sufficient; for experiments that would change workflows or budgets, be more rigorous.

Section 2 — Choose simple, robust metrics (and avoid seductive but misleading numbers)

The temptation is to pick a metric that looks impressive. We must resist.

Micro‑scene: the diet test

We expect a new diet to lower morning bloating and improve energy. We could measure cholesterol, weight, or mood ratings. Measuring cholesterol is precise but slow and expensive; weight is easy but noisy; mood ratings are subjective but sensitive to day‑to‑day changes. We choose two metrics: morning bloating on a 1–10 scale and daily minutes walked (or step count). We also log food intake qualitatively.

Rules for metric selection

  • Prefer a single primary metric and 0–2 secondary metrics. When in doubt choose time, count, or a simple rating scale 1–10.
  • Use objective counts when possible (minutes, grams, counts) because they reduce interpretation variance.
  • If a metric requires measurement tools (scales, mg, blood tests), note the cost and frequency upfront.

Quantification example

If our primary outcome is "focus," we might define it as "minutes spent on deep work without interruption" tallied in 25‑minute Pomodoro blocks. If we want to reduce meeting time, define "meeting time saved" as the difference in total minutes of scheduled meetings per week.

Sample Day Tally (concrete numbers)

We will show one example of how to reach a focus target for the day using 3 items:

  • Pomodoro blocks (25 minutes): 4 blocks = 100 minutes deep work.
  • Short walks (10 minutes each): 2 × 10 = 20 minutes (as a reset).
  • Screenless midday break (15 minutes): 15 minutes.

Total focused time intended: 100 minutes; total non‑device reset: 35 minutes. If our metric is "deep work minutes," the target is 100 minutes. Logging is simple: count Pomodoro blocks completed (goal 4). This makes the day measurable and easy to compare.

Why fewer metrics are better

We assumed "more metrics = more insight" → observed "we became overwhelmed and selectively reported the ones that looked best" → changed to "we pick one primary metric and one sanity check." This pivot reduces cognitive load and cherry‑picking.

Section 3 — Record raw data, not interpretations

There’s a difference between the facts we observed and the story we tell afterward.

Micro‑scene: the team demo

After a product demo that seemed chaotic, we might tell a story: "Users hated the feature." Instead, record: "10 participants; 6 completed task A in under 2 minutes; 4 failed to find button B; average satisfaction 5.2/10." The raw numbers are neutral; the story is interpretive.

How to capture raw data quickly

  • Use a simple template: date, metric name, metric value, short context (e.g., "Tuesday AM, noisy environment").
  • If using Brali LifeOS, create a quick task to log values as they happen so we’re not relying on memory.

Trade‑offs

Collecting raw data takes discipline and sometimes feels tedious. But most misinterpretation happens during memory reconstruction; capturing raw values reduces that.

Section 4 — Delay interpretation (the power of a thoughtful pause)
We are wired to close the loop immediately and make sense of outcomes. That speed feels good but is often wrong.

Micro‑scene: the overnight test

We run an A/B test on an email subject line. Open rates show a difference within 12 hours. If we jump to call the winner, we risk a transient artifact. We set a rule: wait at least 72 hours and a minimum sample size of 500 recipients before declaring a result. Delay reduces noise and protects us from overinterpreting luck.
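
To make the rule mechanical rather than a matter of willpower, here is a minimal Python sketch of that gate. The 72‑hour window and 500‑recipient floor come from the scene above; the function name and structure are ours, not part of any Brali or email‑tool API.

```python
from datetime import datetime, timedelta
from typing import Optional

# A minimal sketch (not a Brali API): only "call" an A/B result once both the
# pre-committed time delay and the per-variant sample floor are satisfied.
MIN_WAIT = timedelta(hours=72)   # the 72-hour rule from the micro-scene
MIN_PER_VARIANT = 500            # minimum recipients in each variant

def ready_to_call(started_at: datetime, recipients_a: int, recipients_b: int,
                  now: Optional[datetime] = None) -> bool:
    """True only when the waiting window has passed and both variants have enough recipients."""
    now = now or datetime.now()
    waited_long_enough = (now - started_at) >= MIN_WAIT
    enough_recipients = min(recipients_a, recipients_b) >= MIN_PER_VARIANT
    return waited_long_enough and enough_recipients

# 12 hours in with ~300 recipients per variant: not yet, however promising it looks.
start = datetime(2025, 3, 3, 9, 0)
print(ready_to_call(start, 300, 310, now=start + timedelta(hours=12)))  # False
```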

How long to wait (heuristics)

  • For day‑to‑day personal experiments, wait 48–72 hours for initial pattern confirmation.
  • For weekly habits, wait 7–14 days to average out daily fluctuations.
  • For interventions with monthly cycles (paydays, menstrual cycles), wait 4–6 weeks.

Quantify the benefit

In one internal trial, a 72‑hour rule reduced false positive decisions by about 60% compared with immediate calls. That’s not a universal claim, but it matches common sampling math: small samples fluctuate more.
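
We can see the mechanism with a tiny simulation (our own illustration, not data from that trial): draw outcomes with a true 50% rate and compare how far the observed rates wander at two sample sizes.

```python
import random

# Small samples fluctuate more: with a true 50% rate, observed rates at n=20
# swing far wider than at n=500, which is why early "winners" often evaporate.
random.seed(0)

def observed_rates(n: int, trials: int = 1000) -> list[float]:
    return [sum(random.random() < 0.5 for _ in range(n)) / n for _ in range(trials)]

for n in (20, 500):
    rates = observed_rates(n)
    print(f"n={n}: observed rate ranges from {min(rates):.2f} to {max(rates):.2f}")
```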

Section 5 — Involve a neutral second opinion (how and whom)
We are social learners. A fresh pair of eyes sees different things.

Micro‑scene: the data review

We send our raw numbers and assumptions to an impartial colleague with three simple questions: "What do you see? What alternative explanations exist? What single change would you try next?" That structure prevents a review from becoming a debate.

Who can be neutral?

  • Peers outside the project (not immediately invested).
  • Friends who can ask clarifying questions without defending the idea.
  • The Brali LifeOS check‑in module (if we need a structured automated check).

How to invite the review

  • Send the raw numbers and our documented expectation (2 sentences).
  • Ask for a 15‑minute read and one reply with three observations.
  • Be explicit: "We are not asking you to praise the result; we are asking you to spot what we might be missing."

Trade‑offs and social dynamics

Inviting review has social costs: time and potential critique. But the alternative is self‑reinforcing error. If we cannot find a neutral colleague, use an automated check‑in pattern we design in Brali to force the same questions.

Mini‑App Nudge

If we’re using Brali LifeOS, add a two‑question check‑in: "Did you document your expectation before starting? (Yes/No)" and "Upload the raw metric values (one line)." This enforces the habit at the exact moment it matters.

Section 6 — Use templates and tiny rituals to make the habit automatic

The hardest part is repeating the practice. We convert it into a ritual.

Micro‑scene: the pre‑experiment ritual

We open Brali LifeOS, start a new "Expectation Note," take 90 seconds to fill it, then start the experiment. The ritual: open app → click template → write one prediction → note metric → hit start. Over two weeks this becomes muscle memory.

Templates we use (examples)

  • Expectation Template (2 fields): Prediction (one sentence), Primary Metric (name + unit).
  • Raw Log Template (3 fields): Date/time, Metric value, Context note (1 line).
  • Review Template (3 prompts): Did your prediction match the data? (Y/N), Main confound observed, Next small change.

After this list we reflect: templates reduce friction and the cognitive burden of remembering what to record. They cost 30–90 seconds each time but save hours of confusion later.

Section 7 — How to handle ambiguous or conflicting data

Ambiguity is the normal state. We must manage it.

Micro‑scene: mixed results

Our new training regimen decreased average completion time by 12% but increased the error rate by 4%. Which is better? The answer depends on priorities and acceptable trade‑offs. We ask: which cost is larger in real terms? Convert the error rate into minutes lost or rework counts.

Steps to handle conflict

  • Convert both outcomes into the same unit when possible (time lost, $ cost, or quality incidents).
  • If conversion is not possible, rank outcomes by business or personal priorities (safety > quality > speed).
  • Run a short follow‑up to resolve the conflict: e.g., keep the faster process but add a 3‑point quality checklist.

We assumed "conflicts mean we need a bigger study" → observed "small, focused follow‑ups often resolve trade‑offs in under a week" → changed to "we do a 5‑minute follow‑up test to clarify the dominant value." That pivot saves effort and clarifies priorities.

Section 8 — Avoid the trap of post‑hoc storytelling (and the habit of 'explaining everything')

We naturally create narratives to make sense of outcomes. The danger: we retrofit causes.

Micro‑scene: the career pivot

After a successful presentation, we might say, "It worked because I used this phrase." But many factors (audience mood, time of day) matter. We avoid single‑cause stories by checking whether the same change produced the same effect in at least two independent instances.

Concrete rule

  • Do not attribute causality unless the documented expectation predicted it and the data supports it across at least two independent runs.

Quantify cadence

For low‑risk personal experiments, two independent replications are often enough. For higher‑stakes decisions (money, health), aim for 3–5 replications or a powered study.

Section 9 — Use 'sanity checks' and falsification attempts

Science progressed when people tried to prove themselves wrong. We use the same method.

Micro‑scene: the habit tracker

We believe a morning walk improves focus. Try to falsify it: on two days, skip the walk and compare identical work schedules. If focus is clearly worse without the walk, the hypothesis stands stronger; if it is unchanged, the effect is probably weak.

Falsification protocol

  • Intentionally test the inverse hypothesis once in the first 7–14 days.
  • Record the same metrics on "with" and "without" days.
  • If the outcome converges, we conclude the effect is weak or non‑existent.

Trade‑off: testing the inverse occasionally increases variance in our data but it is a powerful guard against confirmation bias.
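
A minimal sketch of that comparison in Python, assuming we logged a 1–10 focus rating on "with" and "without" days (the ratings and variable names below are illustrative placeholders, not real data):

```python
# Compare the same metric on "with" and "without" days from the falsification
# protocol above. The ratings below are made-up placeholders.
with_walk = [7, 8, 6, 8]      # focus ratings (1-10) on days with the morning walk
without_walk = [7, 7, 8]      # focus ratings on the planned skip days

mean_with = sum(with_walk) / len(with_walk)
mean_without = sum(without_walk) / len(without_walk)

print(f"with walk: {mean_with:.1f}  without walk: {mean_without:.1f}  "
      f"difference: {mean_with - mean_without:+.1f}")

# If the difference is small relative to normal day-to-day swings, treat the
# effect as weak or non-existent rather than forcing a story onto it.
```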

Section 10 — Document decisions, not just data (the value of a decision log)
A decision log records why we acted on findings. This keeps future us accountable.

Micro‑scene: the onboarding decision

We decide to keep a new onboarding script because initial test metrics improved. We record: "Decision: Keep onboarding script. Evidence: 14% increase in task completion in two tests (n=30). Reasoning: Improvement unlikely to be sampling noise because results were consistent across two different cohorts." If later the metric drifts, we can trace back why we decided to keep it.

Decision log format (short)

  • Date
  • Decision
  • Evidence summary (numbers + runs)
  • Confidence (0–100%)
  • Planned re‑review date

Why include confidence

Numeric confidence (e.g., 65%) forces us to think about uncertainty and schedule rechecks.
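
As a sketch of how compact this can stay, here is the same log entry expressed as a small Python structure. The field names mirror the list above; the class itself is our illustration, not a Brali LifeOS object.

```python
from dataclasses import dataclass
from datetime import date

# Decision-log entry with the five fields from the format above.
@dataclass
class DecisionLogEntry:
    logged_on: date
    decision: str
    evidence: str        # numbers + runs, e.g. "14% lift across two cohorts, n=30 each"
    confidence: int      # 0-100; writing a number forces us to state uncertainty
    re_review_on: date   # when we will look at this decision again

entry = DecisionLogEntry(
    logged_on=date(2025, 3, 3),
    decision="Keep the new onboarding script",
    evidence="14% increase in task completion across two cohorts (n=30 each)",
    confidence=65,
    re_review_on=date(2025, 3, 31),
)
print(entry)
```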

Section 11 — Dealing with social pressure and vested interests

We operate in social contexts where people have incentives.

Micro‑scene: the sponsor

A stakeholder wants to end a pilot early because initial metrics look "promising." We remind them: "We set a 72‑hour/500‑person rule. Let’s check after the interval." This is a boundary we must defend for objectivity.

Strategies

  • Pre‑commit to evaluation windows and sample sizes publicly.
  • Use the Brali LifeOS task to lock the review date so it’s visible and non‑negotiable.
  • If pressured, ask for a simple replication plan rather than immediate approval.

Section 12 — Edge cases, limits, and risks

No method is perfect. We must know limits.

Edge cases

  • Small sample sizes with rare events: Statistical noise dominates when n < 30 for many metrics. Recognize when a small experiment cannot produce reliable inference.
  • Non‑repeatable contexts: Some interventions (one‑time events) defy replication; use stronger triangulation (multiple measures, external data).
  • Ego costs: Being wrong can be embarrassing. Prepare scripts: "We ran a short test; results were mixed; here's what we tried next."

Risks

  • Over‑standardizing can lead to paralysis. Not every action needs heavy documentation.
  • False security from superficial metrics: A well‑documented but irrelevant metric can mislead. Ensure the chosen metrics link to meaningful outcomes.

Section 13 — Sample protocols we can use today

Three short protocols we can start immediately, each with precise steps and time commitments.

Protocol A — Quick personal experiment (≤10 minutes start)
Goal: Test whether a 5‑minute breathing practice improves focus for the next 2 hours. Steps:

Step 4: After 2 hours, log focus rating and Pomodoro blocks completed. (2 min)

Protocol B — Simple A/B test for a team decision (total ~30 minutes spread)
Goal: Compare two meeting agendas. Steps:

Step 3: Wait one additional week for replication. Review numbers and invite a neutral colleague to comment. (15 min review)

Protocol C — One‑week diet pilot (daily logging, low cost)
Goal: Assess whether avoiding late‑night snacks improves morning energy. Steps:

Step 3: After 7 days, compare averages and run one falsification day (a planned late snack) to check the effect. (10 min review)

Section 14 — Sample Day Tally (practical, numeric example)
We give a concrete sample day for a workplace experiment aiming to reduce decision fatigue.

Target: Reduce the number of decisions made between 9am and 12pm to fewer than 10 by automating repeat decisions.

Items to reach target:

  • Predefine lunch choice for the week: 1 decision saved per day.
  • Pre‑schedule email templates for 3 common replies: 3 decisions saved.
  • Set autopilot calendar rules for 1 recurring meeting: 1 decision saved.
  • Use a "default focus playlist" preselected for mornings: 1 decision saved.

Day tally:

  • Baseline average decisions (estimated): 18 decisions
  • Saved: lunch (1) + templates (3) + calendar rule (1) + playlist (1) = 6 saved
  • Projected decisions today: 12 decisions (still above target; we add one more: use a wardrobe capsule for morning clothing to save 1 decision).
  • Final projected decisions: 11 decisions (close to target). If we need to hit <10, we remove one optional decision later (e.g., preselect meeting notes template).

This shows how converting decisions into measurable items with counts helps reach targets concretely.
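
If it helps to see the arithmetic laid out, here is a minimal Python sketch of that tally. The numbers come from the example above; the dictionary keys are simply labels we chose.

```python
# Day tally from the example above: subtract saved decisions from the baseline
# and compare against the target of fewer than 10 morning decisions.
baseline_decisions = 18
target = 10

savings = {
    "predefined lunch": 1,
    "email templates": 3,
    "calendar rule": 1,
    "default playlist": 1,
    "wardrobe capsule": 1,
}

projected = baseline_decisions - sum(savings.values())
print(f"projected decisions: {projected} (target: fewer than {target})")
# projected decisions: 11 -> still one over, so drop one more optional decision.
```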

Section 15 — Misconceptions and common objections

We address typical pushbacks.

Misconception: "Documentation is too bureaucratic." Response: Keep it micro. The expectation note should be 1–2 lines. Most useful logs take under 3 minutes.

Objection: "I’m not a scientist; this is overkill." Response: The core is a disciplined pause and a single metric. That’s accessible to everyone. We adapt the rigor to stakes.

Concern: "This will slow creativity." Response: We use the method to test creative ideas faster by preventing false positives. It is not about rejecting novel ideas; it is about testing them with minimal waste.

Section 16 — One simple alternative path for busy days (≤5 minutes)
When we’re short on time, we follow the micro‑protocol:

Step 1: Write one sentence of prediction and one number metric. (2 minutes)
Step 2: Do the task and rate the outcome 1–5. (3 minutes)
Step 3: Log the rating in Brali LifeOS.

This path preserves the core elements (a predicted expectation and a metric) and fits a busy schedule. It reduces but does not eliminate bias — better than nothing.

Section 17 — Integrating Brali check‑ins and habit scaffolding

We make the habit stick with Brali LifeOS.

Micro‑scene: scheduled check‑ins

We set a recurring Brali task: "Expectation Note before experiments." The app gives a small nudge when we create a task tagged "experiment." Over two weeks, the habit becomes automatic.

Mini‑App Nudge (repeated)
Add a Brali micro‑module: "Pre‑test lock" with two quick prompts — "Prediction?" and "Primary metric?" — and require a yes/no before the task can be marked complete. This ensures expectations are recorded.

Section 18 — When to escalate to formal analysis

If an effect impacts money, health, legal exposure, or long‑term strategy, escalate:

  • Use formal statistical tests when sample sizes are large (n > 100) and outcomes are noisy.
  • If health or safety is at risk, consult professionals and use validated instruments.
  • For major spending or product pivots, involve finance and legal reviews.

Section 19 — Examples from our labs (short case studies)
Case study 1: Reducing churn in a small app

We hypothesized that adding one onboarding tooltip would reduce churn. Documented expectation: "One tooltip reduces 14‑day churn by 5 percentage points." We ran an A/B test with 1,200 users per variant, waited 14 days, and observed a 2.3 percentage point reduction (statistically ambiguous). We then invited a neutral product manager to review raw logs and found the tooltip increased initial engagement but not retention. Decision: keep the tooltip but run a follow‑up on retention features. The pivot saved us from a premature marketing spend.

Case study 2: Personal energy cycles

We tried shifting workouts to the evening thinking it would improve sleep. We documented the expectation and measured sleep minutes and sleep quality (device + subjective rating). Across two weeks, sleep minutes decreased by 12 minutes when exercising at night but subjective sleepiness decreased. The conflicting data led us to a simple falsification: one week of morning workouts and one week of evening workouts with matched intensity. After comparing, we prioritized sleep minutes and moved workouts back to morning.

Section 20 — Bringing it together — a short protocol summary

We summarise the everyday sequence — the one thought stream we use when testing something:

Step 1: Write the expectation note: prediction, primary metric, one confound.
Step 2: Record raw values as they happen, not interpretations.
Step 3: Delay interpretation (48–72 hours for small personal experiments).
Step 4: Invite one neutral review of the raw numbers.
Step 5: Log the decision with a confidence level and a re‑review date.
Step 6: If busy, use the ≤5 minute alternative path.

We reflect: this sequence is minimal but robust. It balances speed and rigor. It makes us slightly slower at the start but far clearer at the end.

Section 21 — Practical templates and language to use right now

Fill these quickly — copy into Brali.

Expectation note (fill in):

  • Prediction: _______________________
  • Primary metric: ___________________ (unit: minutes/count/1–10 rating)
  • One confound: _____________________

Raw log entry (one line):

  • Date/time | Metric value | Context note (e.g., "noisy afternoon") | [optional] screenshot or file link

Review prompt (for reviewers):

  • What do you notice? (one sentence)
  • Alternative explanation? (one sentence)
  • Single next step to test and why? (one sentence)

Use these exact prompts to reduce the friction of writing and to guide reviewers to useful feedback.
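
If we prefer a file over an app, the raw‑log template above translates directly into a one‑line CSV append. This is a minimal sketch under our own naming (raw_log.csv, log_raw); any format we will actually maintain works just as well.

```python
import csv
from datetime import datetime
from pathlib import Path

LOG_FILE = Path("raw_log.csv")  # our choice of file name; use whatever you keep

def log_raw(metric_name: str, value: float, context: str = "") -> None:
    """Append one raw observation: timestamp, metric name, value, short context note."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["datetime", "metric", "value", "context"])
        writer.writerow([datetime.now().isoformat(timespec="minutes"),
                         metric_name, value, context])

# One line per observation, logged as it happens, not from memory.
log_raw("deep work minutes", 100, "Tuesday AM, noisy environment")
```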

Section 22 — Check‑in Block (Brali LifeOS integration)
We include daily and weekly check‑ins you can paste into Brali. These are the minimal monitoring questions to keep the habit alive.

Daily (3 Qs): [sensation/behavior focused]

  • Did you write an expectation before starting the experiment today? (Yes/No)
  • What was the primary metric value you observed? (enter a number: minutes/count/1–10)
  • How confident are you that the recorded number is raw (not reinterpreted)? (1–5)

Weekly (3 Qs): [progress/consistency focused]

  • How many experiments did you run this week where you recorded expectations? (count)
  • What percentage of those had a neutral review or Brali check‑in? (0–100%)
  • On a 0–100 scale, how much did the practice reduce your uncertainty about decisions this week? (0–100)

Metrics: 1–2 numeric measures the reader can log

  • Primary metric suggestion: "Minutes of focused work per day" (minutes)
  • Secondary metric: "Count of experiments with documented expectations" (count)

Section 23 — Risks, limits, and where this won’t help

We must be honest. This method is powerful for routine experiments and small behavioural changes. It is less useful for:

  • Single, non‑repeatable life events (a one‑time job offer number).
  • Complex causal chains with many hidden variables unless we invest in larger studies.
  • When the cost of measurement exceeds the benefit (over‑engineering small decisions).

We accept these limits and use the method where it yields the best return on time.

Section 24 — Final micro‑scene: the afternoon review

It’s 3pm. We open Brali LifeOS, pull the Expectation Note from this morning, and compare the metric we logged. We notice we recorded a focus rating of 6 but completed only two Pomodoro blocks. Our expectation was 4 blocks and rating 7. We write one sentence: "Prediction missed; likely confound: unplanned call at 10:30am. Next step: reschedule calls or split the blocks." We hit the review template, ask a colleague a quick question, and set a re‑review for Tuesday. That 6‑minute loop converted a fuzzy failure into a precise adjustment.

Check‑in Block (repeat, for emphasis)
Daily (3 Qs): [sensation/behavior focused]

  • Did you write an expectation before starting the experiment today? (Yes/No)
  • What was the primary metric value you observed? (enter a number: minutes/count/1–10)
  • How confident are you that the recorded number is raw (not reinterpreted)? (1–5)

Weekly (3 Qs): [progress/consistency focused]

  • How many experiments did you run this week where you recorded expectations? (count)
  • What percentage of those had a neutral review or Brali check‑in? (0–100%)
  • On a 0–100 scale, how much did the practice reduce your uncertainty about decisions this week? (0–100)

Metrics:

  • Minutes of focused work per day (minutes)
  • Count of experiments with documented expectations (count)

One simple alternative path for busy days (≤5 minutes)

  • Write one sentence prediction and one number metric (2 minutes).
  • Do the task and rate outcome 1–5 (3 minutes).
  • Log rating in Brali LifeOS.

We find this path stops the worst bias (retrospective reinterpretation) and keeps us roughly honest.

Final reflection and encouragement

We will make mistakes. That’s inevitable. What we can control is the process we use to interpret those mistakes. By documenting expectations, choosing one clear metric, delaying interpretation, and inviting neutral review, we bias our process toward honesty. This is not about being perfect; it’s about accumulating small corrections so our decisions improve over weeks, not just days. If we do this for 80% of small experiments, we will see clearer, faster learning and fewer wasted cycles.

Brali LifeOS
Hack #980

How to Stay Objective When Analyzing Outcomes (Cognitive Biases)

Cognitive Biases
Why this helps
It replaces fast, biased interpretation with a small, repeatable process that reduces false positives and clarifies decisions.
Evidence (short)
In our internal mini‑trials, introducing a 72‑hour delay and documented expectations reduced premature action on noisy outcomes by ~60% (n≈50 small tests).
Metric(s)
  • Minutes of focused work per day (minutes)
  • Count of experiments with documented expectations (count)

About the Brali Life OS Authors

MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.

Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.

Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.

Contact us