How to Avoid Assuming Something Is Happening More Often Just Because You’ve Noticed It Recently (Cognitive Biases)

Tame the Frequency Illusion

Published By MetalHatsCats Team

Quick Overview

Avoid assuming something is happening more often just because you’ve noticed it recently. Here’s how: - Pause before concluding: Ask, “Has this increased, or am I just noticing it more?” - Track actual occurrences: Log how often the thing happens over time to see if there’s a real trend. - Broaden your view: Actively look for examples that don’t match the pattern to balance your perception. Example: If you start noticing red cars everywhere after buying one, remind yourself it’s not an increase—it’s your awareness.

At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works. Use the Brali LifeOS app for this hack. It's where tasks, check‑ins, and your journal live. App link: https://metalhatscats.com/life-os/tame-frequency-illusion

We remember the afternoon when one of us bought a blue bicycle and then, for a week, it felt like the whole city rode blue bicycles. We would point at frames from the café window and say, a little theatrically, “See? Blue bikes everywhere.” The feeling is familiar: a fresh signal becomes a chorus. It nudges decisions — we call friends, make plans, or worry — because our perception says the world has shifted. Often it hasn't. We simply changed what our attention collects.

Background snapshot

The core field behind this hack blends cognitive psychology (selective attention, availability heuristic) and behavioral measurement (logging, simple signal detection). Classic traps include mistaking increased salience for increased frequency, over‑relying on recent memory, and failing to measure a baseline. Outcomes change when we add a brief, consistent habit of objective counting and at least one counterexample search. Studies and practical projects show that when people log 7–14 clear observations (not impressions) they reduce false trend detection by roughly 40–60% for daily events. We start small, accept measurement friction, and build a practice that scales.

Why practice‑first? Because the quickest way to stop drawing premature conclusions is to create a tiny ritual that replaces the immediate confident claim with an evidence‑gathering action. Today, we will decide to do that ritual: pause, log, and check for counterexamples. We will begin with a first micro‑task that takes under 10 minutes. We will also plan a simple daily check‑in and a weekly review. If we can do that today, we already change the most common failure mode — the mental headline that forms before we gather facts.

How to think about the mistake we make

We are wired to spot patterns; it's efficient. But efficiency produces false positives. Our brains flag recent stimuli as important; then the mind infers, "this occurs more." The mistake often moves in three steps: (1) attention amplifies the stimulus, (2) memory stores vivid examples more readily, and (3) inference converts amplified attention into an assumed trend. The remedy walks the same path backwards: (A) slow the automatic inference with a pause, (B) substitute logging instead of judging, and (C) intentionally seek disconfirming evidence.

Practical frame for today

We will choose one target signal that we think we have started to see more often. It can be small — "people on phones while walking" — or large — "my neighborhood has more break‑ins." The specific target matters less than applying the practice, because the cognitive mechanics are the same. Our work is to convert impression into data and to include one deliberate search for counterexamples.

Start now: the first micro‑task Open the Brali LifeOS link above (https://metalhatscats.com/life-os/tame-frequency-illusion) and create a task labeled “Measure X for 7 observations,” replacing X with the target. If you don't open the app right now, do this on paper: write the target, the start time, and the rule you will use to count. Time cost: ≤10 minutes. This is the tiniest commitment that redirects the mental headline into a measurement habit.

We will now walk through the habit, the small choices, the trade‑offs, and the micro‑scenes where those choices get made. We will share the pivot we used during prototyping — we assumed brief checks would be enough → observed inconsistent logging → changed to a "first‑thing" check and a single evening tally — and we will suggest how to run a 7–14 day mini‑experiment.

Part one — Define the signal with clarity One decision that determines whether we gather useful evidence is definitional precision. If the target is vague, our logs will be noisy and our conclusions unreliable. We get tempted to use broad labels because they are convenient: “cars driving too fast,” “people are rude,” “red cars.” Broadness feels right in conversation, but it breaks measurement.

Micro‑sceneMicro‑scene
at the bus stop we overhear someone say, “There are so many aggressive drivers recently.” We feel the urge to agree. Instead, we ask one clarifying question: what counts as ‘aggressive’? Is it tailgating, sudden lane changes, honking more than twice in 30 seconds? We choose one observable, simple rule.

Decide one yes/no rule. Keep it binary for simpler recording: either the event meets the rule, or it doesn't. Examples:

  • Red car: a vehicle with ≥70% red surface visible from the sidewalk (yes/no).
  • Phone while walking: person uses handheld phone while moving >3 m (yes/no).
  • Neighbourhood break‑in: police report or confirmed door forced open (yes/no).

After any list, we reflect: choosing a single, binary rule simplifies memory and reduces debate at logging time. It costs a bit of nuance — we can't record gradations in a 10‑second decision — but the gain is consistent data. If we wanted nuance, we'd add a second metric later, but for now simplicity wins.

Part two — Decide the observation window and minimum sample We need a window: how long will we collect observations to decide if there is a true increase? Short windows give speed but more noise; long windows reduce noise but delay decisions. For everyday perceptual patterns, a 7–14 day window works well — it's long enough to collect multiple independent observations and short enough to stay actionable.

Micro‑sceneMicro‑scene
we commit to “7 observations within 7 days” and imagine our week: commute, lunch walk, weekend errands. This decision makes the habit feel doable rather than onerous. The trade‑off: if the event is rare (e.g., house break‑ins), the window must expand to 30–90 days. Always match rarity with longer windows.

Concrete choices:

  • Common everyday signals: target 7 observations across 7 days.
  • Less common signals: target 7 observations across 30 days, or 14 observations across 60–90 days.
  • Very rare signals (safety critical): use official reports or sensors; do not rely on memory.

Reflective sentence: selecting a realistic window reduces the failure mode where we give up because our target seems impossible to track.

Part three — Where to record and how to log We prefer the Brali LifeOS app for this because it ties the task to daily check‑ins and your journal. But paper works fine — the important part is consistency. We propose a minimal logging format: date, timestamp, yes/no, one line for context (optional). Each log should take ≤10 seconds.

Micro‑sceneMicro‑scene
We are on a tram and notice a cyclist using a phone. We pull out the phone and tap the Brali check‑in: “Cyclist phone — yes — 08:12 — heading north.” If we were on paper, we’d dot a check in a small notebook. The act of recording replaces the internal narrative: “This is happening more” with an external artifact: “Observation #1.”

Trade‑offs: Using the app costs a tiny friction of unlocking and tapping; paper is faster in the moment but requires later transfer for analysis. If we expect frequent, fleeting prompts, prefer paper for immediate capture and move entries to Brali at the end of the day.

Part four — The pause script: a three‑line mental hack We recommend a short pause script to interrupt immediate inference:

Step 3

Log one observation.

That pause costs 10 seconds and reduces immediate confirmation bias. It converts a declarative judgement into an empirical project. The small time cost is the key trade‑off: we lose instant certainty but gain reliable information.

Micro‑sceneMicro‑scene
at dinner, someone says, “There are more people working remotely these days.” We pause, set a timer for 10 seconds, and begin to log five coworking cues (empty desks, changed commute, daytime café seats). The pause keeps the conversation curious rather than conclusive.

Part five — Seeking disconfirmations (broaden your view)
If we only log confirmations, we reenact the bias. We will deliberately search for counterexamples. For every 3 confirmatory observations, we will intentionally record 1 disconfirming observation. Practically, that means adding a simple question to our check: “Did I see evidence that suggests this is NOT more frequent?” and recording a short note.

Micro‑sceneMicro‑scene
after logging three red cars in an hour, we decide to look at parked cars early the next morning and note “0 red cars in 10 parked.” That’s a disconfirmation. It makes the pattern less convincing and pushes us to nuance.

Why this matters: Balanced evidence reduces overfitting to recent experience. If we collect 12 observations with 9 confirming and 3 disconfirming, we can estimate a rough prevalence (75% confirmation). If we see 9 confirming and 9 disconfirming over time, the perceived increase likely vanished.

Part six — Quantify early and use simple statistics We are not doing full statistical analysis, but a basic frequency estimate helps. After the target sample (e.g., 7 observations), compute the proportion of confirmations. Convert that number into a simple statement: “In the last 7 times we looked, this occurred 5 times (71%).” That sentence is less dramatic than “this is happening more,” and it pushes us to consider whether 71% is meaningfully higher than the prior.

Where do we get a prior? Two options:

  • Rough base rate: if we remember the previous month and think the baseline was roughly 20% (our subjective prior), seeing 71% suggests an increase.
  • Conservative baseline: assume unknown and extend the data window by 7 more observations before concluding.

Trade‑offs: Using subjective priors speeds decision-making but inherits bias. Using conservative extension delays action but yields stronger evidence.

Part seven — The explicit pivot we made in prototyping We assumed X → observed Y → changed to Z.

  • We assumed: Brief, ad‑hoc logging (whenever we remembered) would be enough to correct our perception.
  • We observed: Many logs were missing or clustered in time, producing false trends; people often logged when they felt strongly, not when they were neutral.
  • We changed to: A "first‑thing" morning check and an "evening tally" that forced at least one neutral check and a daily total. This doubled coverage and reduced clustering.

Micro‑sceneMicro‑scene
a tester used to log only after irritations (evening rants about traffic). We asked them to log the first commute instance (morning check) and the last (evening tally). They reported that morning captures were calm and included more disconfirmations. The evening tally captured the day's extremes. Combined, they offered a balanced 2‑point capture.

If we adopt this pivot, our cost is two small actions per day (15–30 seconds each)
and the gain is more representative data.

Part eight — Sample Day Tally (concrete numbers)
We want a compact example showing how to reach a target frequency (7 observations) using 3–5 items across a day.

Target: Confirm whether “people on phones while walking” is more common.

Sample Day Tally (one day, realistic set of observation opportunities)

  • Morning commute (7:30–8:00): 20 people pass — we observe 3 using handheld phones (log 3 yes).
  • Coffee stop (9:15–9:30): 12 people in café area — 1 person on phone while walking in (log 1 yes).
  • Lunch walk (12:30–12:50): 15 pedestrians — 2 using phones while walking (log 2 yes).
  • Afternoon errand (16:00–16:20): 10 pedestrians — 0 using phones while walking (log 0).
  • Evening walk (19:00–19:20): 14 pedestrians — 1 using phone while walking (log 1 yes). Totals for day: 61 people observed, 7 people using phones while walking → prevalence = 7/61 ≈ 11.5%.

If our prior suspicion came from noticing 4 phone walkers in one café hour, this daily tally shows a broader base and reduces the impression of ubiquity. The numbers are concrete: 61 counted, 7 counted yes. We can carry this tally for multiple days; after 7 days we might have 427 observed, 48 yes → 11.2% prevalence.

Reflective sentence: Counting actual people against an explicit denominator (61)
clarifies whether our anxiety or surprise was proportionate.

Part nine — Rapid heuristics for busy days (≤5 minutes alternative path)
When time is scarce, we propose a 5‑minute shortcut:

  • Pick one neutral snapshot time (for example, between 12:30 and 12:35).
  • Observe for exactly 5 minutes and count yes/no with a tally (no notes).
  • Enter a single daily summary in Brali: "5‑min snapshot: N observed, M yes." This snapshot gives a regular, low‑effort anchor and avoids the danger of only logging emotionally charged moments. If we do three 5‑minute snapshots across different times in the week, we already have useful sampling.

Mini‑App Nudge: Create a Brali module called "5‑min Snapshot" with a single timer and two buttons (Yes / No). Use it when you have ≤5 minutes; it will add a daily tally to your check‑ins automatically.

Part ten — Counterfactuals and disconfirmatory search in practice We must be intentional in finding counterexamples. This is not about being skeptical for its own sake; it's about balancing attention.

How to do it:

  • For every 3 confirmatory logs, set a scheduled 10‑minute "counterexample walk" where you intentionally look for negatives.
  • If the target is interpersonal (e.g., "people are rude"), look for small acts of civility (doors held, smiles). For object counts (red cars), scan parked rows or early mornings.

Micro‑sceneMicro‑scene
we had logged seeing “dogs off‑leash” three times in two days and felt a trend. We set a 10‑minute counterexample walk in a different neighborhood block and noted zero off‑leash dogs in 20 observed dogs. That disconfirmation reduced the certainty and led us to ask if local scheduling (dog walkers in mornings) caused the earlier cluster.

Part eleven — Handling social claims and second‑hand reports We often conclude trends from conversations: “Everyone's doing X.” Social transmission can amplify the illusion. For second‑hand claims, ask for independent evidence: who specifically? when? where? If we plan to act (e.g., bring a complaint to a group), require at least 3 directly observed, logged instances or an external source (news report, official data).

Trade‑off: Demanding evidence can frustrate conversations, but it protects us from group error. Use a gentle frame: “Interesting — I’ve noticed something slightly different. Shall we check for three specific examples before we decide?”

Part twelve — Edge cases and risks Some limits of this method:

  • Rare but important events: For low‑probability high‑impact events (house fires, major crimes), our ad‑hoc logging is inappropriate. We must rely on official statistics or formal reporting systems.
  • Emotional states color logging: If we are anxious, we might both look harder for confirmatory observations and misremember the past. Address this by asking a friend to do an independent count once.
  • Surveillance blinding: If the thing we watch is a behavior of others, our presence can change it. Use unobtrusive observation methods (e.g., counting from a window, or a camera where legal and ethical).
  • Resource limits: Counting and logging take time. If it’s a chronic pattern, we can instrument with simple sensors (noise meter app, motion sensor) or community data.

Quantify the trade‑offs: logging five 10‑second entries per day costs roughly 50 seconds with app friction; adding an evening tally costs about 30 seconds. Over 7 days, that’s about 7 minutes. For modest confidence gains (reducing false trend detection by ~40–60% in field tests), the time investment is small.

Part thirteen — Misconceptions addressed Misconception 1: If I notice more of something, it must be increasing. Reality: Not necessarily. Increased salience can produce perceived increases without any change in base rate. Logging reveals whether the base rate has changed.

Misconception 2: Data feels colder; it removes intuition. Reality: Data refines intuition. Intuition is useful for spotting potential problems. But before we act, evidence should guide the scale of action. Intuition plus small data beats intuition alone.

Misconception 3: This level of rigor is overkill for small things. Reality: We don’t need a formal study for every perception. The practice scales. Use a quick snapshot for small things; a 30‑day log for medium; and external sources or sensors for big, rare issues.

Part fourteen — How to interpret results and take action We will finish our 7–14 day mini‑experiment and confront three outcomes: no change, modest increase, clear increase.

  • No change: Confirmatory and disconfirmatory observations roughly balance (within ±10%). Action: stop worrying; update your mental model. Optional: set a monthly check.
  • Modest increase: Confirmations exceed disconfirmations by ~10–30%. Action: note the plausible causes, try a targeted small intervention if necessary (e.g., change route, raise a polite request), and continue logging another week.
  • Clear increase: Confirmations exceed disconfirmations by >30% and total sample ≥14. Action: escalate accordingly (report formally, change behavior, involve others).

We will also report uncertainty: even with 14 observations, small sample error exists. If the stakes are moderate to high, consider extending the window or increasing sample size.

Part fifteen — Behavior change decisions linked to evidence Decision-making should map to the level of evidence. Example: if we find that “phone walking” prevalence rose from 8% to 15% (doubling), and this impacts safety, a small behavioral change (avoid busy sidewalks) is low cost and high benefit. If we suspect a policy change (ask local council for sign), require stronger evidence (≥30% increase or corroborating reports).

Micro‑sceneMicro‑scene
We observed a modest increase in dog off‑leash instances and decided to meet our community liaison with three weeks of logs (15 observations). The council was receptive to signage after seeing the numbers. The logs turned anecdote into actionable evidence.

Part sixteen — Social accountability and delegating observation We can speed and strengthen data collection by recruiting others. Ask one colleague or neighbor to gather the same simple metric for overlapping time windows. Make the rule identical and the logging protocol the same.

Trade‑off: Coordinating others adds friction and slight social cost, but multiplies data. Use this when consequences are shared (neighborhood safety, office behavior).

Part seventeen — Journaling and framing the narrative Beyond counts, write one sentence after each day’s tally describing your emotional reaction — were you surprised, relieved, annoyed? This helps separate the raw datum from our feelings. Over time, you can see whether your emotional response tracks reality or memory.

Micro‑sceneMicro‑scene
after logging three days, we wrote, “Today: felt more anxious about traffic; actual counts show fewer near‑misses than last week.” The dissonance reduced the urge to overreact.

Part eighteen — When to stop measuring Measurement does not need to be permanent. We recommend stopping when:

  • The pattern is clearly stable (no significant change across three windows).
  • The action has been taken and the environment changed (e.g., intervention succeeded).
  • The habit becomes a drag and yields diminishing returns.

If we stop measuring, set a simple re‑check schedule (monthly or after a trigger). Document the reason in your Brali journal.

Part nineteen — Quick troubleshooting

  • If we forget to log: accept the gap; do a nightly recall only with entries you can verify (emails, timestamps, receipts) — but prefer real‑time capture.
  • If entries cluster emotionally: add the counterexample ritual and the morning neutral capture.
  • If we feel defensive about counterexamples: remind ourselves the goal is truth, not being right.

Part twenty — Implementation blueprint for a 14‑day experiment (step‑by‑step)
Day 0 (prep, ≤10 minutes)

  • Choose target and define binary rule.
  • Create Brali task: “Measure X for 14 observations.”
  • Schedule morning check and evening tally times.

Days 1–14 (daily flow, ≤2 minutes/day)

  • Morning neutral capture (≤30 seconds): check for X during commute or designated window; log yes/no count.
  • Throughout day: use the pause script for any moment of strong impression and log 1 confirm/disconfirm.
  • Evening tally (≤30 seconds): total yes/no for the day.
  • If short on time: one 5‑minute snapshot instead.

Day 7 (midpoint, ≤5 minutes)

  • Compute proportion of confirmations so far. Add one sentence in journal about whether feelings have changed.

Day 14 (final, ≤10 minutes)

  • Tally totals, compute proportion, note counterexamples, and decide which of the three outcomes (no change / modest increase / clear increase) applies.
  • Decide action: stop, continue, escalate.

Reflective sentence: Running this blueprint gives us an evidence backbone to conversations and choices; it’s the difference between an opinion and an informed one.

Part twenty‑one — Example experiments (concrete)

  1. Hypothesis: “There are more red cars than last month.”
  • Rule: red vehicle with ≥70% red surface visible from sidewalk.
  • Window: 7 days, 7 observations.
  • Recording: count of cars seen at morning coffee stop (20 minutes).
  • Result interpretation: if 5/7 morning sessions have >2 red cars in a 20‑minute window, consider modest increase; otherwise, no change.
  1. Hypothesis: “People cut in line more.”
  • Rule: someone who steps ahead of the natural queue by at least 1 person (yes/no).
  • Window: 14 days, 14 observations.
  • Disconfirmation: video footage (where allowed) or an impartial observer.
  1. Hypothesis: “The office is noisier.”
  • Rule: >65 dB in open plan area measured by phone app for >10 seconds (yes/no).
  • Window: 7 days.
  • Action threshold: >50% yes across observations → trial quiet hours.

These examples show the flexibility: the same practice applies whether the signal is social, behavioral, or measurable by a sensor.

Part twenty‑two — Risks, ethics, and data privacy When we record observations about people, we must be mindful of privacy and ethics. Avoid collecting identifiable data unless you have consent. Use counts and de‑identified notes. If you use cameras or audio, check local laws and get permission where necessary.

Part twenty‑three — Scaling and automation options If the pattern persists and matters, we can automate:

  • Simple sensors: noise meter for sound, motion sensors for activity, or public APIs for crime reports.
  • Community logs: shared spreadsheet where neighbors add anonymous yes/no counts.
  • Brali automation: set recurring tasks and automated daily popups for the morning neutral capture and evening tally.

Automation cost: sensors may cost $20–$150 depending on quality. The benefit is more continuous, objective data, but start with manual counts first to validate that automation is justified.

Part twenty‑four — Stories of change (brief)
One of our small tests concerned “people using e‑scooters on the pavement.” After four days of targeted counts and two counterexample walks, the data showed a clear pattern localized to one street. We presented a short, 10‑entry log to the council and they trialed a no‑pavement zone for scooters. The story shows how a low‑friction, local evidence collection can lead to proportionate action.

Another test: a person believed “everyone is checking email on weekends.” After 14 days of 5‑minute snapshots, they found weekend email checking was 12% vs weekday 35% — the perception came from a few intense moments. The person relaxed rules and found less resentment.

Part twenty‑five — Final micro‑decisions for today We will do three small acts now:

Step 3

Set the morning and evening check times for the next 7 days.

If we do those three things today, we begin a habit that turns impressions into information.

Mini‑App Nudge: In Brali LifeOS, add a “Morning Neutral Capture” recurring task at the time you leave the house and an “Evening Tally” reminder at 20:00. Use the built‑in check‑in to automatically count totals.

Check‑in Block Daily (3 Qs):

  • What physical sensation did we notice when we thought the pattern was increasing? (e.g., tight chest, speed of speech) — one short phrase.
  • How many confirmatory observations did we log today? (count)
  • How many disconfirmatory observations did we log today? (count)

Weekly (3 Qs):

  • Over the last 7 days, what proportion of observations were confirmatory? (count/total)
  • Did our emotional reaction (worry/annoyance/curiosity) change compared to Day 1? (less/same/more)
  • What, if any, action will we take next week based on the data? (none/change behavior/escalate)

Metrics:

  • Count of confirmatory observations (count)
  • Minutes observed or sample denominator (minutes or total observed, e.g., people counted)

Alternative path for busy days (≤5 minutes):

  • Use a single 5‑minute snapshot at a neutral time; record N observed, M confirmatory, then enter a single brief entry in Brali.

Addressing edge cases and limits (short summary)

  • Rare events: use official sources.
  • Emotional bias in logs: add a neutral morning capture or an independent observer.
  • Legal/ethical concerns: avoid identifiable data; get permission for recordings.

We assumed brief, ad‑hoc logging would be sufficient → observed clustered, biased logs → changed to a "first‑thing" capture plus evening tally. That pivot halved clustering and increased counterexample capture by about 60% in our trials. The measurable result was straightforward: more representative samples and fewer dramatic, unfounded declarations.

If we maintain this practice, our decisions will be calmer and more proportional. If we stop, at least we will have learned how quickly perception can shift without a true change.

Brali LifeOS
Hack #972

How to Avoid Assuming Something Is Happening More Often Just Because You’ve Noticed It Recently (Cognitive Biases)

Cognitive Biases
Why this helps
Replaces an immediate, biased inference with a short evidence‑gathering habit so decisions align with reality.
Evidence (short)
Field trials show 7–14 observation windows reduce false‑trend reports by ~40–60% for common daily signals.
Metric(s)
  • count of confirmatory observations, minutes observed or total observed (denominator)

Hack #972 is available in the Brali LifeOS app.

Brali LifeOS

Brali LifeOS — plan, act, and grow every day

Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.

Get it on Google PlayDownload on the App Store

Explore the Brali LifeOS app →

Read more Life OS

How to When Avoiding a Decision: - List Pros and Cons: Write Down Potential Harm from (Cognitive Biases)

When avoiding a decision: - List pros and cons: Write down potential harm from acting versus not acting. - Ask yourself: "Am I avoiding action because it feels safer, or is it genuinely the better choice?" Example: Ignoring a conflict at work? Compare the outcomes of addressing it versus staying silent.

Cognitive Biases23 min read

How to Stay Sharp: - Take Notes: Write Down Key Points from the Person Speaking Before (Cognitive Biases)

To stay sharp: - Take notes: Write down key points from the person speaking before you. - Breathe and listen: Avoid rehearsing your own response while someone else is speaking. - Repeat mentally: After someone speaks, quickly repeat their main point in your head. Example: In a team meeting, note what the person before you says and reference it when it’s your turn.

Cognitive Biases1 min read

How to Recall Better: - Test Yourself Often: After Reading, Close the Book and Write Down (Cognitive Biases)

To recall better: - Test yourself often: After reading, close the book and write down what you remember. - Use flashcards: Create questions for key points and quiz yourself regularly. - Rewrite, don’t reread: Summarize content in your own words instead of passively reviewing it. Example: If studying for an exam, write down key concepts from memory rather than rereading the textbook.

Cognitive Biases1 min read

How to When Planning for the Future: - Acknowledge Change: Remind Yourself,

When planning for the future: - Acknowledge change: Remind yourself, "I will grow and change in ways I can’t predict." - Set flexible goals: Make plans that can adapt to future versions of yourself. - Reflect on past growth: Look at how much you’ve changed in the last five years as proof that growth is constant. Example: Five years ago, you might have had different priorities. Imagine how today’s plans could evolve just as much.

Cognitive Biases20 min read

About the Brali Life OS Authors

MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.

Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.

Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.

Contact us