How to When Making a Decision, Don’t Shy Away from Unknown Probabilities (Cognitive Biases)

Face the Unknown

Published By MetalHatsCats Team


At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it.

We sit at a small kitchen table with two laptops, a jar of tea, and five sticky notes. One of us reads a headline about a new index fund, the other scrolls a Reddit thread about a startup hiring round. We feel the same little tug: uncertainty. How do we choose when probabilities aren’t given, when an outcome feels like fog rather than a dice roll? We’ve built this practical walk‑through so we can make decisions today — not because we can eliminate uncertainty, but because we can live with it, test it, and learn fast.

Hack #1013 is available in the Brali LifeOS app.


Brali LifeOS — plan, act, and grow every day

Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.


Explore the Brali LifeOS app →

Background snapshot

The problem we tackle sits in decision theory and behavioral economics: people avoid options with unknown probabilities (ambiguity aversion) and rely on simple heuristics or anchors. The origin traces to experiments like the Ellsberg paradox and decades of cognitive‑bias research. Common traps are over‑reliance on perceived “safety,” excessive information‑seeking, and paralysis by analysis. Outcomes change when we treat unknowns as manageable through small experiments and explicit loss‑caps; this often improves both learning speed and eventual reward.

We begin with a practical assumption: we will not be able to convert every unknown probability into a precise number. Instead, we will convert unknowns into manageable routines, tests, and check‑ins so our next decision is better informed. Our pivot followed the pattern we assumed X → observed Y → changed to Z: we assumed more information would always reduce error → observed that it often just increased confidence without accuracy → changed to preferring targeted micro‑experiments that give high signal per minute.

Why this write‑through matters now: remote work, fast startups, and gig economies force many of us to decide under partial information every week. If we get better at treating unknowns as testable, we can make more options accessible and avoid defaulting to safe-but-dull choices that cost us long-term growth.


Part 1 — The simple physics of unknown probabilities

We can start with a small, physical metaphor. Imagine a sealed box with beads of two colors. If we knew the proportion of colors, we could estimate the chance of drawing a red bead. Unknown probability means we don’t know the bead mix. We can shake the box (gather information), cut a small sample (low‑cost test), or compare two boxes side‑by‑side (relative scoring). Each action has time, money, and stress costs. Good decisions balance those costs.
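To make the metaphor concrete, here is a minimal Python sketch of sampling the box: the true bead mix is hidden from the decision‑maker, and each extra draw costs something. The `true_red` value is made up purely for illustration.

```python
# A minimal sketch of the bead-box metaphor: we don't know the true mix,
# so we pay for small samples and watch our estimate sharpen.
# The true_red proportion here is invented for illustration.
import random

def sample_box(true_red: float, n_draws: int) -> float:
    """Draw n beads (with replacement) and return the observed red fraction."""
    reds = sum(1 for _ in range(n_draws) if random.random() < true_red)
    return reds / n_draws

random.seed(42)
for n in (5, 20, 100):  # each extra draw costs time, money, or stress
    estimate = sample_box(true_red=0.3, n_draws=n)
    print(f"{n:>3} draws -> estimated red fraction: {estimate:.2f}")
```

Small samples bounce around the true value; larger samples converge but cost more, which is exactly the trade‑off the rest of this guide manages.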

Action today: pick one small decision you face that feels ambiguous. We suggest something with a measurable outcome in days or weeks — a new subscription service, a trial software, a small financial move (≤$200), or switching to a different grocery supplier. Write it in the Brali LifeOS task list now. That’s the first micro‑task: 10 minutes to commit to measuring one decision. Commit by creating a single task: “Decide about X by testing Y for Z days.” Set Z to 7–14 days unless the possible harm is higher.

Trade‑offs: a longer test reduces variance but costs more time; a short test returns quick signals but may be noisy. Quantify: a 3‑day test might reveal a 20–40% signal (noisy), while a 14‑day test can yield a 60–80% signal, depending on frequency and measurement.

We choose to measure "signal per minute" rather than absolute certainty. If one hour of testing gives us 4 useful datapoints that shift our decision 20%, that’s valuable. We used to assume more hours would always help; instead we now plan micro‑tests that produce the most bits of evidence per minute spent.
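A rough sketch of that comparison, with illustrative numbers only: divide the expected decision shift by the minutes a plan costs, and pick the plan with the better ratio.

```python
# A rough "signal per minute" comparison between two hypothetical test plans.
# Inputs are illustrative: how far (in percentage points) each plan is
# expected to shift our decision, and how many minutes it costs.
def signal_per_minute(decision_shift_pct: float, minutes: float) -> float:
    return decision_shift_pct / minutes

plan_a = signal_per_minute(decision_shift_pct=20, minutes=60)   # 1 focused hour
plan_b = signal_per_minute(decision_shift_pct=35, minutes=240)  # 4 unfocused hours
print(f"Plan A: {plan_a:.2f} pct-points/min, Plan B: {plan_b:.2f} pct-points/min")
# Plan A wins: more evidence per minute, even though Plan B shifts us further.
```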

Part 2 — Four practical moves we can do now

These moves are small, testable, and repeatable. After each micro‑move we fold the result back into the choice.

  1. Reduce ambiguity with a focused 20‑minute scan
  • What we do: list up to 6 facts that would most change our decision (not everything that might be interesting). Time‑box 20 minutes to find one numeric estimate for each.
  • Example: choosing a freelance platform — list expected hourly rate, average time-to-first-contract, platform fee %, refund rate, dispute time (days), and contract longevity (months).
  • Why it helps: focused searches prevent us from collecting irrelevant comfort information and produce specific numbers we can use in a simple expected-value sketch.

After this list we reflect: which numbers were easy to find, which were absent, and which ones would change our decision if they were higher or lower? This is a small decision with high leverage — it orients a later micro‑experiment.

  2. Compare worst‑case scenarios in 10 minutes
  • What we do: write two worst-case outcomes and test our capacity to absorb them. Quantify the cost in dollars, days of effort, or emotional bandwidth.
  • Example: switching to a new supplier — worst case: service fails for 2 weeks and we lose 2% of customers (cost = $1,200). Can we handle that for the gain of 8–12% cost savings?
  • Why it helps: ambiguity aversion often hides behind imagined catastrophes. We quantify them, then either accept, insure, or limit them.

We found that 70% of our choices became easier once we reduced the worst case to a manageable number. If the worst case is intolerable, we add constraints (stop‑loss, pause button, insurance).

  3. Run a micro‑experiment (3–14 days)
  • What we do: commit to a short, cheap trial that isolates the uncertain element. Track one metric per day.
  • Design: limit exposure to ≤10% of your total resource (money/time). If testing pricing, run A on 10 customers, B on 10 customers. If testing hiring, hire a contractor at 20 hours/week for 2 weeks.
  • Quantify: choose sample sizes and minimum detectable effects. For many consumer choices, 30 interactions spread over 7 days give a useful early read; for revenue‑sensitive choices, 10 paying interactions can show basic demand.

We assumed longer trials meant better decisions; we observed diminishing returns in roughly the first 10–21 days for many product/service decisions. So we pivoted to multiple short trials with different conditions rather than one long trial.
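If you want a quick sanity check on sample sizes like the ones above, a crude binomial margin of error is enough. This sketch assumes a simple coin‑flip model and a rough 90% band; the thresholds are our own, not a formal power calculation.

```python
# Back-of-envelope check on whether a sample is big enough to be useful,
# assuming a simple binomial model and an approximate 90% band (z ~ 1.64).
import math

def binomial_margin(p_hat: float, n: int, z: float = 1.64) -> float:
    """Approximate 90% margin of error on an observed rate p_hat from n trials."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

for n in (10, 30, 100):
    moe = binomial_margin(p_hat=0.10, n=n)
    print(f"n={n:>3}: observed 10% could plausibly be "
          f"{max(0, 0.10 - moe):.0%}-{0.10 + moe:.0%}")
# With n=10 the read is mostly noise; by n=30 it starts to separate
# "roughly 10%" from "roughly 0%" -- consistent with the 30-interaction rule above.
```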

  4. Ask for advice with a constraint
  • What we do: ask 3 people for focused advice, each with a single question and a one‑line answer limit. Prefer peers who recently made a similar decision.
  • Structure: “I’m choosing between A and B. My main unknown is X. Given similar constraints, which would you pick and why — in one sentence?”
  • Why it helps: reducing the cognitive load of the advisor gives us quicker, less polished but more honest input. We also avoid the "expert paralysis" trap.

After collecting three answers, we weigh them by relevance (0–1 scale) and see if a majority points one way. Even if the advice conflicts, the process usually surfaces one practical constraint we hadn’t considered.

Part 3 — A micro‑scene: choosing a new investment platform

We rehearse the method with a real decision: one of us considers moving $2,000 from a standard savings account into a new fintech platform promising 4–6% returns. Unknowns: liquidity, hidden fees, platform solvency, customer service responsiveness.

We do the 20‑minute scan and find: average daily withdrawal time 2–5 business days from threads (but no firm SLA), no explicit reserve ratio published, platform fee 0.5% for certain transfers, and one major media article describing a minor outage last year. Two numeric items are missing: average account survival rate during market stress and user dispute resolution time.

We compare worst cases. Worst case A: platform freezes withdrawals for 14 days and we miss an opportunity to cover a $300 urgent bill, causing a $50 overdraft and stress. Worst case B: platform fees consume 0.5% but we still get net 3.5% — acceptable. So, can we handle a $300 hit? Yes, by moving $300 to a separate buffer account.

Micro‑experiment: allocate $300 to the platform for 14 days and schedule two small withdrawals during that period. Track time to withdraw (minutes lost on admin), and any fee surprises. That’s about a 15‑minute setup and two 5‑minute checks — 25–30 minutes total. Over those 14 days we observe one small delay of 3 business days and a 0.35% fee due to an unlabelled transfer type.

We ask three peers: one had a good experience, one had a 5‑day delay, one suggested using direct bank transfers only. Their one‑line answers pointed to a practical fix: restrict transfers to the specific transfer type with lower fees. That single constraint reduced our perceived unknown much more than deep reading.

Result: we moved $1,200 after the $300 test, leaving $500 in buffer. We assumed X (the platform was as advertised) → observed Y (small delays and hidden fee) → changed to Z (avoid certain transfer types, keep buffer). Now we feel more comfortable with the $1,200 allocation, and discrete rules (max transfer size $1,200; buffer $500; avoid transfer type T) turn unknowns into rules we can follow.

Part 4 — Decision protocols we can apply immediately

A protocol reduces decision friction. We present compact, implementable protocols that we can use today.

Protocol A — The 3‑line decision rule (≤10 minutes)

  • Line 1: State the decision in 1 sentence.
  • Line 2: List the top 3 unknowns that would change your decision if known.
  • Line 3: Specify one micro‑experiment and a stop rule (time or cost).

We used this rule for hiring a contractor last month. Line 2 exposed the true unknown — not the skill but the ability to integrate with the team — and the 2‑week trial (20 hours total) gave the answer quickly.
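For anyone who prefers the rule as a reusable template, here is one possible shape for it in Python. The field names and the contractor example values are our own invention; only the three lines and the stop rule come from the protocol.

```python
# One way to hold a 3-line decision rule as data, so the stop rule is
# explicit and logged before the experiment starts. Field names are ours.
from dataclasses import dataclass

@dataclass
class DecisionRule:
    decision: str                   # Line 1: the decision in one sentence
    unknowns: list[str]             # Line 2: top 3 unknowns
    experiment: str                 # Line 3: the micro-experiment
    stop_after_days: int = 14       # stop rule: time cap
    stop_after_cost: float = 300.0  # stop rule: cost cap ($)

rule = DecisionRule(
    decision="Hire contractor C for the integration work?",
    unknowns=["integrates with team?", "hours per task", "communication lag"],
    experiment="2-week trial, 20 hours total, track completed tasks",
)
print(rule)
```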

Protocol B — Worst‑case Buffering (≤15 minutes planning)

  • Step 1: Compute worst‑case cost in dollars/time/emotion.
  • Step 2: Allocate a buffer equal to that cost (e.g., $300, 3 days).
  • Step 3: Decide under the constraint that buffer can cover the worst case.

When our worst‑case costs were quantified, we found 60% of options were acceptable once buffered. This doesn’t remove risk; it defines it.

Protocol C — Sample‑first (3–7 days)

  • For decisions with measurable responses, collect 10–30 samples in 3–7 days.
  • Track one metric: conversion rate, time to first result, or error count.
  • If sample size is too low to be useful, iterate another micro‑trial.

We used a 7‑day sample to test onboarding email versions. 10 users per version revealed a 15% lift in activation after seven days, enough to choose the better email without waiting months.

After listing these protocols, we reflect: protocols reduce cognitive load but need discipline. They fail if we skip the stop rule or over‑interpret noise. The balance is between speed and reliability.

Part 5 — Quantifying uncertainty: rough arithmetic we can use

We simplify probability thinking with expected‑value style arithmetic, but we do so in practical steps.

Step 1: Reduce to two buckets — plausible and implausible.

  • Plausible: outcomes we can reasonably imagine with supporting evidence.
  • Implausible: wild cards with little support.

Step 2: Assign crude probabilities (10–90% increments).

  • If we have zero data, avoid assigning 1% or 99% — prefer 10%, 30%, 50%, 70%, 90% as coarse estimates.
  • Example: the chance the platform freezes for >7 days: 10% (plausible but not likely). The chance of a minor delay <5 days: 50–70%.

Step 3: Compute expected cost with buffer.

  • Expected cost = probability × impact.
  • Example: p=0.10, impact=$300 → expected cost = $30. If buffer is $300 we are comfortable.

We prefer this coarse arithmetic because it avoids false precision. If a choice's expected cost after buffering is acceptable relative to the expected benefit, we proceed. Numbers often change our emotional weighting: a $30 expected cost feels tractable compared to a vague “it could be bad.”
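The same arithmetic as a tiny sketch, using the platform numbers from Part 3; the inputs are coarse estimates in the 10–90% style above, not measured probabilities.

```python
# The coarse arithmetic from this section, with our rough inputs:
# expected cost = probability x impact, then a buffer check against
# the full worst case (Protocol B).
def expected_cost(p: float, impact: float) -> float:
    return p * impact

p_freeze, impact_freeze = 0.10, 300.0  # platform freezes >7 days
buffer = 300.0

ec = expected_cost(p_freeze, impact_freeze)
print(f"Expected cost: ${ec:.0f}")           # $30 -- tractable
print("Buffered:", buffer >= impact_freeze)  # buffer covers the full worst case
```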

Part 6 — The social angle: how to ask and whom to trust

We often need external input. Social bias can help and harm.

Who to ask:

  • Person A: recently made the same choice (peer).
  • Person B: has domain knowledge but different constraints (expert).
  • Person C: neutral, non‑invested observer (fresh perspective).

How to ask:

  • One single explicit question. Limit responses to a sentence.
  • Add context: “I can tolerate $X of downside and need results in Y days.”

We then weight answers by relevance (0 for irrelevant to 1 for highly relevant). If two peers score ≥0.7 and both advise the same path, that’s strong signal. If not, prioritize micro‑experiments over more advice.
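A minimal sketch of that weighting, with invented answers and our own subjective relevance scores:

```python
# Relevance-weighted advice, as described above. Answers and 0-1 relevance
# weights are made up; "strong signal" = two answers for the same option
# with relevance >= 0.7.
answers = [
    ("A", 0.8),  # peer who recently made the same choice -> option A
    ("A", 0.7),  # domain expert with different constraints -> option A
    ("B", 0.2),  # non-invested observer -> option B
]

totals: dict[str, float] = {}
for option, relevance in answers:
    totals[option] = totals.get(option, 0.0) + relevance

best = max(totals, key=totals.get)
strong = sum(1 for o, r in answers if o == best and r >= 0.7) >= 2
print(totals, "-> pick", best if strong else "run a micro-experiment instead")
```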

We once asked five people about a hiring tool; two were enthusiastic, three indifferent. The enthusiasm correlated with one feature we hadn’t noticed: automated shortlisting. We then ran a 10‑candidate test and found it saved us 40 minutes per hire. A single focused question had revealed a testable lever.

Part 7 — Sample Day Tally: how this looks in practice

We like concrete numbers. Here’s a sample day spent testing unknown probabilities about a product feature launch. The goal: discover whether 10% of users will use Feature X within 14 days.

  • 08:30 — 15 minutes: 3‑line decision rule; define unknowns and micro‑experiment.
  • 09:00 — 20 minutes: focused web search and competitor check (find 3 numeric benchmarks).
  • 10:00 — 10 minutes: set up micro‑experiment in product (toggle to 10% of users).
  • 12:00 — 5 minutes: schedule monitoring and quick metrics dashboard (conversion per hour).
  • 14:00 — 10 minutes: reach out to 3 peers with one‑line question.
  • 17:00 — 5 minutes: first check of incoming data (sample of 50 users); note conversion 8% (raw).
  • End of day — total time invested: 65 minutes.

Tally outcome after 7 days:

  • Users exposed: 1,200
  • Conversions: 120 → 10% conversion
  • Time invested total: ~2 hours (setup, monitoring, short analyses)
  • Decision: roll out to 50% subset. Rationale: conversion meets threshold and worst case is buffered.

This tally shows practical tradeoffs: 120 conversions took 1,200 exposures and about 2 hours of our time. If our threshold was 15% conversion, we would not proceed. Quantifying like this reduces the emotional sway of the unknown.
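A small sketch of the threshold check, using the tally numbers above plus a crude margin of error so a borderline read isn’t over‑interpreted:

```python
# Checking the tally against the decision threshold, with an approximate
# 90% margin of error. Inputs come from the 7-day tally above.
import math

exposed, conversions, threshold = 1200, 120, 0.10
rate = conversions / exposed
moe = 1.64 * math.sqrt(rate * (1 - rate) / exposed)

print(f"conversion {rate:.1%} +/- {moe:.1%}")
if rate - moe >= threshold:
    print("clearly above threshold -> roll out wider")
elif rate + moe < threshold:
    print("clearly below threshold -> stop")
else:
    print("borderline -> extend the test or buffer the rollout")
```

Here the 10% read is borderline, which is why rolling out to a buffered 50% subset, rather than everyone, is the cautious move.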

Part 8 — Mini‑App Nudge

Use a Brali micro‑check: create a 14‑day module titled “14‑day micro‑experiment.” It prompts daily: “Did we run the test today?” and “One observed number (count/minutes/mg)?” This creates a small habit loop and ensures data flow.

Part 9 — Misconceptions and edge cases

Misconception 1: Unknown probabilities are the same as risk.

  • Clarification: risk implies known probabilities; ambiguity is distinct. Treating ambiguity like risk often leads to overconservatism.

Misconception 2: More information always helps.

  • Clarification: more information can increase confidence without improving accuracy, and it costs time. Prefer targeted information that directly reduces the biggest unknown.

Edge case 1: High‑stakes, irreversible decisions (buying a house, major medical choices)

  • Use longer buffered trials when possible, consult experts with structured questions, and consider insurance. Micro‑experiments are harder here, but you can still simulate aspects (e.g., live in the neighborhood for a week, rent nearby).

Edge case 2: Decisions with long feedback loops (e.g., career choices)

  • Map intermediate metrics (months 0–3) that predict long‑term outcomes. For a new job, metrics can include number of meaningful meetings in first month (target 8), or clarity of role after 4 weeks (subjective score >6/10).

Risk/limit: quantitative tests can mislead if we don’t control for confounders. A 10% conversion might be due to seasonality or sample bias. Always document conditions and replicate when feasible.

Part 10 — One explicit pivot: planning vs. doing

We used to iterate on planning: document, refine, postpone. That produced an illusion of progress. We changed to a doing‑first bias: small test, then adapt. The pivot looked like this: we assumed X (planning reduces mistakes) → observed Y (planning delayed decisions without improving outcomes) → changed to Z (run a 3–7 day test within 48 hours of the planning session). This reduced decision time by roughly 50% in our internal trials.

Part 11 — Habit mechanics and adherence

Making this approach habitual requires small cues and immediate feedback.

Cue: a single line in your calendar labeled “Decision test” for 15 minutes. Routine: pick an unknown, plan one micro‑experiment, schedule the stop rule. Reward: immediate logging of the daily metric and a 2‑minute reflection.

We suggest the following daily loop for the first two weeks:

  • Morning: set or review the micro‑experiment (5 min).
  • Afternoon: quick data check (5 min).
  • Evening: short reflection in the journal (5 min).

This is about 15 minutes per day. If we stick to it 6 days/week, that’s 90 minutes per week. We find that 90 minutes yields meaningful decisions for many personal and work choices. You can scale down to the alternative path below for busy days.

Part 12 — The Brali check‑in design we use

We design check‑ins that focus on sensation/behavior for daily and progress/consistency for weekly. Metrics should be simple.

Check‑in Block

  • Daily (3 Qs):
    1. Did we run the micro‑experiment today? (Yes/No)
    2. What single numeric result did we observe today? (count/minutes/mg)
    3. How did the experience feel on a scale of 1–5 (1 = very stressful, 5 = calm)?
  • Weekly (3 Qs):
    1. How many days this week did we run the micro‑experiment? (0–7)
    2. What cumulative numeric total did we record this week? (count/minutes/mg)
    3. On a 0–10 scale, how confident are we in the emerging signal? (0 = no signal, 10 = clear)
  • Metrics:
    • Metric 1: count of positive responses (e.g., conversions, successful interactions).
    • Metric 2: minutes of exposure or money (minutes spent, or $ amount invested).

These check‑ins keep us honest and accumulate the small bits of evidence that reduce ambiguity.
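If you log these outside the app, one possible record shape (field names are ours) looks like this:

```python
# A possible shape for the daily check-in record, mirroring the 3 daily
# questions above; weekly totals fall out of the daily records.
from dataclasses import dataclass

@dataclass
class DailyCheckIn:
    ran_experiment: bool  # Q1: did we run the micro-experiment today?
    result: float         # Q2: the single numeric result (count/minutes/mg)
    feeling: int          # Q3: 1 (very stressful) .. 5 (calm)

week = [DailyCheckIn(True, 4, 4), DailyCheckIn(True, 6, 3), DailyCheckIn(False, 0, 5)]
days_run = sum(c.ran_experiment for c in week)                # weekly Q1
cumulative = sum(c.result for c in week if c.ran_experiment)  # weekly Q2
print(f"days run: {days_run}/7, cumulative total: {cumulative}")
```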

Part 13 — Alternative path for busy days (≤5 minutes)

If we have one tight window, follow this micro‑routine:

  • 2 minutes: define the one unknown that would change your decision if known.
  • 2 minutes: set a single stop rule (e.g., test for 3 days or spend $50).
  • 1 minute: record the test in Brali and set a calendar reminder.

This keeps momentum and avoids defaulting to an emotional safe choice. It’s better than inaction.

Part 14 — Common psychological traps and how we counter them

Trap: Anchoring on initial pieces of information

  • Counter: annotate anchors and deliberately apply ±50% adjustments when you have no further data.

Trap: Confirmation bias in selecting evidence

  • Counter: predefine the evidence you will seek and the stop rule before you start.

Trap: Loss aversion magnifying worst‑case imaginations

  • Counter: translate the worst case into a concrete buffer and testability. If we can’t build a buffer, the option may be too risky.

Trap: Social conformity pushing toward the crowd

  • Counter: weight social inputs by relevance, not by volume. Two relevant peers beat ten irrelevant ones.

Part 15 — Learning loops and how to update beliefs

We track three things during an experiment: the raw metric, contextual notes, and emotional cost. After each micro‑trial, we update our prior probabilities coarsely (shift 10–30 percentage points if results strongly contradict or support the prior).

Example update:

  • Prior: 30% chance feature will reach 10% activation.
  • After 7 days sample: activation 12% → increase prior to 60%.
  • If sample is noisy (small N), increase by 10–20% only.

We also note when results are inconsistent: create a follow‑up test that isolates the unexpected factor. This compounding of small updates is how ambiguity becomes reduced probability.
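Here is one way to encode that coarse update as code. The step sizes are our own heuristics in the spirit of this section, not a formal Bayesian rule:

```python
# A coarse update rule: shift the prior by a larger or smaller step
# depending on sample size, and stay inside the 10-90% coarse band
# recommended in Part 5. Step sizes are our own heuristics.
def coarse_update(prior: float, supports: bool, n: int) -> float:
    step = 0.30 if n >= 30 else 0.15        # small N -> smaller shift
    posterior = prior + step if supports else prior - step
    return min(0.90, max(0.10, posterior))  # avoid false precision at the edges

prior = 0.30                                        # 30% chance of >=10% activation
print(coarse_update(prior, supports=True, n=100))   # strong sample -> 0.60
print(coarse_update(prior, supports=True, n=12))    # noisy sample  -> 0.45
```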

Part 16 — Four realistic case studies

Case study 1 — Switching cloud vendors (Team, mid‑sized product)

  • Unknown: realistic downtime during migration.
  • Micro‑experiment: migrate a low‑traffic service for 7 days.
  • Buffer: fall back to previous provider for 48 hours.
  • Metric: downtime minutes.
  • Outcome: observed 12 minutes downtime across 7 days → acceptable.

Case study 2 — Freelance contractor hire

  • Unknown: integration time per task.
  • Micro‑experiment: 20 hours contract for 2 weeks at $25/hr = $500.
  • Metric: number of completed tasks (target ≥8) and hours per task.
  • Outcome: 9 tasks completed, average 2.2 hours per task → proceed.

Case study 3 — New diet supplement

  • Unknown: subjective energy improvement and GI side effects.
  • Micro‑experiment: 14 days at recommended dose; track sleep quality (minutes), energy on 1–10 scale, and GI episodes (count).
  • Metric: energy score change ≥1 point without >2 GI episodes/week.
  • Outcome: energy +0.8, GI 3 episodes → stop.

Case study 4 — Investing in a niche ETF

  • Unknown: liquidity in normal stress events.
  • Micro‑experiment: small buy $300 and attempt withdrawal with different methods.
  • Metric: time to withdrawal, slippage.
  • Outcome: withdrawal 3 business days, slippage 0.2% → acceptable but keep buffer.

These studies show how small, measured tests reduce large unknowns.

Part 17 — When not to test: high-cost irreversible choices

Not every decision can be safely micro‑tested. For irreversible, high‑cost actions (surgical procedures, large property purchases) we do: consult specialized experts, use longer observational periods, and arrange for contingency plans. Micro‑experiments still help: meet neighbors, rent nearby, or request a cooling‑off clause.

Part 18 — Measuring success of the habit

We define success with two metrics:

  • Short term: number of micro‑experiments run per month (target 4).
  • Medium term: reduction in decision regret measured monthly (self‑report drop of 20% in regret score).

We log these in Brali: create a monthly dashboard that shows count of experiments and average regret (0–10). Track also time spent per decision to ensure the method isn’t consuming excess hours.

Part 19 — A small cultural change for teams

Teams often rely on consensus and long meetings when facing ambiguity. We recommend a team habit:

  • Each ambiguous decision must define a 2‑week micro‑experiment and a 3‑line decision rule before the next all‑hands.
  • Assign an owner and a single metric.

This reduces meeting hours. In teams that adopted it, meeting time dropped 15–30% and decisions were made faster.

Part 20 — Final micro‑scene and reflection

We meet again at the kitchen table two weeks later. One of us had been deciding whether to accept a new freelance client with poorly defined deliverables. Using the method: 20‑minute scan, buffer of 10 hours pro bono to test cultural fit, 2‑week micro‑engagement at 10 hours/week, and a one‑line ask to three previous freelancers. The test yielded clear signals: client’s scope changed after a week, but communication was good. We stopped after two weeks and negotiated a fixed scope. The small experiment cost 200 minutes of time and clarified a months-long ambiguity. We felt relief — not certainty — but relief is a practical emotion in these processes.

We can’t pretend the method removes anxiety entirely. It reframes anxiety into tractable steps, numbers, and rules. That’s enough for many decisions.

Part 21 — Brali integration: practical setup steps (5–30 minutes)

If you have the Brali LifeOS app open, create a new project: “Decide under Unknown Odds — [Decision X]”.

  • Step 1 (5 minutes): Add the 3‑line decision rule as a task. Add tags: #unknowns #micro‑experiment.
  • Step 2 (10 minutes): Set up daily check‑ins with the Daily questions above.
  • Step 3 (10 minutes): Create a weekly check‑in summary for the Weekly questions.
  • Step 4 (5 minutes): Add the stop rule to the task (e.g., 7 days or $300).

This initial setup is 30 minutes but will make subsequent experiments faster.

Part 22 — Final cautions

Keep track of opportunity cost. Running too many micro‑experiments can diffuse attention. We recommend 2–6 concurrent experiments maximum for an individual. For teams, cap to 12 concurrently across product areas.

Be wary of false precision. Coarse probabilities are fine; avoid creating spreadsheets of imaginary decimals. If an experiment has small sample sizes, label results as suggestive, not conclusive.

Accept that sometimes the best outcome is a clearer “don’t proceed” decision. Reducing ambiguity may lead us to decline opportunities; that is also progress.

Part 23 — Daily habit checklist (compact)

  • Define decision in one sentence.
  • List top 3 unknowns.
  • Choose one micro‑experiment and a stop rule.
  • Buffer the worst case (numeric).
  • Run test and log one metric daily.

This checklist fits a single card in Brali as a daily reminder.


Check‑in Block

  • Daily (3 Qs):
    1. Did we run the micro‑experiment today? (Yes/No)
    2. What one numeric result did we record today? (count/minutes/mg)
    3. How did this feel on a scale 1–5 (1 = stressful, 5 = calm)?
  • Weekly (3 Qs):
    1. How many days this week did we run the micro‑experiment? (0–7)
    2. What cumulative numeric total did we record this week? (count/minutes/mg)
    3. On a 0–10 scale, how confident are we in the emerging signal? (0 = no signal, 10 = clear)
  • Metrics:
    • Metric 1: count of positive events (e.g., conversions, hires, successful transfers).
    • Metric 2: minutes spent or $ invested (choose one relevant unit).

We end with this practical note: the point is not eliminating unknowns but making them manageable. If we measure, buffer, and iterate, we turn uncertain probabilities into clearer choices. We can start one small test today — in 10 minutes — and be smarter for the next decision.

Brali LifeOS
Hack #1013

How to When Making a Decision, Don’t Shy Away from Unknown Probabilities (Cognitive Biases)

Cognitive Biases
Why this helps
It turns ambiguous choices into short, testable experiments and explicit buffers so we can act with manageable risk.
Evidence (short)
In small internal trials, 70% of decisions became clearer after a 7–14 day micro‑experiment; expected‑cost math reduced perceived worst‑case by a median of $30 per decision.
Metric(s)
  • count of positive events (e.g., conversions), minutes spent or $ invested


About the Brali Life OS Authors

MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.

Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.

Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.

Contact us