How to Recognize “I Knew It All Along” Thinking
When Reviewing Past Events, Ask: “Did I Really Know This Beforehand, or Is It Hindsight Talking?”
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.
We start with a small, honest sentence: hindsight is deceptively persuasive. When a project succeeds or fails, our brains rearrange memory and meaning so that outcomes look inevitable. If we want to learn reliably from the past, we must slow down and inspect what we actually knew before the event, not what makes sense after the outcome. This is a practice we can do today; a habit we can track. The goal is not to spare ourselves from responsibility but to make our learning sharper by separating prediction from explanation.
Practice anchor: Hack #1038 is available in the Brali LifeOS app.

Brali LifeOS — plan, act, and grow every day
Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.
Background snapshot
The study of hindsight bias—sometimes called the “I‑knew‑it‑all‑along” effect—began in experimental psychology in the 1970s. Researchers found that, after learning an outcome, people tend to overestimate the predictability of that outcome. Common traps include rewriting memory of prior uncertainty, overstating the clarity of earlier signals, and underweighting the role of chance. Learning from the past often fails because we skip the step of recording what we believed ahead of time; we rely on memory, which is malleable. What changes outcomes is a small procedural fix: capture pre‑event judgments, then compare them later. That simple discipline reduces overconfidence and improves calibration by measurable amounts (studies typically show calibration improvements in the 10–30% range when predictions are recorded).
Why this helps in practice: it slows us, forces concrete numbers or probabilities, and gives us a clean comparison point for feedback. The rest of this long read shows how to do it today, every day, with mini‑decisions, micro‑scenes, and the Brali LifeOS habit loop.
We’ll walk together through the habit as if we were practicing it in a noisy office, on a small project, and in quiet reflection at night. We will write predictions, use simple scales, and treat the habit as an experiment we can refine. We assume one workweek and one personal decision as our starting fields of practice.
Part 1 — Why record predictions and how to start right now
We are busy people. When something goes wrong, our mind reaches for a tidy story: “I saw the warning signs.” When something goes right, we tell ourselves we planned it. That tidy story feels good. It costs accuracy.
A practical start takes 5–10 minutes and two tools: your phone (or a notebook) and one short prompt. Today, pick one decision you expect to face: a meeting outcome, a coding sprint delivery, or whether a friend will reply to a message. Before the meeting or action, stop for 60 seconds and write:
- The exact prediction in a sentence (e.g., “The team will miss the milestone by 3–5 days.”).
- A probability between 0–100% for that outcome (e.g., “70% chance we miss by 3–5 days”).
- One short reason why (2–3 bullets, symptoms not narrative).
If we commit to this once today, we already improve calibration. The immediate benefit is that we can be kinder to ourselves in review—less “I should have known”—and clearer about what signals mattered.
Practice now (3–5 minutes)
We’ll do this together in a micro‑scene: it’s Monday morning, coffee warm, calendar open. A sprint planning meeting is at 10:00. We open Brali LifeOS (or our notebook) and create a task: “Pre‑meeting prediction — sprint deliverables.” We type the prediction line, the probability, and three short reasons. We set a check‑in to review after the meeting. Then we breathe and go to the meeting. The time commitment before the event is 3 minutes; the value is a later, less biased learning moment.
Why numbers matter
Probabilities are harder than yes/no answers, but they pin down conviction. Saying “we’ll probably miss” is soft. Saying “70% chance” is measurable. If we consistently assign probabilities, we can track calibration over weeks. If we never use numbers, we cannot measure error. Research shows that even simple probability sliders produce more accurate self‑assessments—accuracy improves roughly 10–20% relative to qualitative judgments.
Part 2 — The template we actually use (and why it’s short)
We tried long checklists. We assumed a long form would produce deeper reflection → observed that people skipped it → changed to a tight 3‑field template. Keep it short so we use it.
The working template (≤2 minutes to fill)
- One‑sentence prediction.
- Probability (0–100%).
- Three quick reasons (each 5–10 words).
We can add optional context tags: “work”, “friendship”, “finance”, “health”. That’s it. This is the starting record we will reference when outcomes arrive.
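If we keep the log outside Brali, a plain timestamped file works. Here is a minimal Python sketch of what one entry could look like; the field names and file format are our own illustration, not the Brali format.

```python
import json
from datetime import datetime, timezone

def log_prediction(path, prediction, probability, reasons, tags=None):
    """Append one timestamped prediction entry to a JSON-lines file."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # locks the pre-event moment
        "prediction": prediction,    # one sentence
        "probability": probability,  # 0-100
        "reasons": reasons,          # three short reasons
        "tags": tags or [],          # optional context: work, finance, ...
        "outcome": None,             # filled in at review time
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_prediction(
    "predictions.jsonl",
    "The team will miss the milestone by 3-5 days.",
    70,
    ["dev bandwidth low", "unclear specs", "vendor dependency"],
    tags=["work"],
)
```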
Micro‑scene: a failed pitch
We pitched a client at 2 pm. At 1:45 we wrote: “Client will say ‘no’ today.” 60% confidence. Reasons: tight budget; recent leadership turnover; unclear value demo. At 3:30 they said yes. We captured surprise. Instead of saying “we knew this,” we looked back: one reason (unclear demo) was false; budget constraints were real but offset by new priorities; leadership noise did not matter. That allowed us to spot which lever actually moved the outcome—the demo framing—so we could replicate it.
Part 3 — How to structure the review so it teaches
When the outcome appears, we open the original prediction and compare. This is where the learning happens. The review has three concrete moves:
- Compare prediction vs. outcome numerically (e.g., predicted 70% miss, actual: missed by 4 days).
- Score our calibration: were we underconfident, overconfident, or well calibrated? A simple rule: if we assigned 70% probability, the event should occur about 7 times out of 10 in similar situations. For single events, we ask whether our judgment matched what actually happened (binary), but aggregation over weeks gives the true number.
- Trace which reasons held up and which did not. Mark each reason “true”, “partly true”, or “false”.
We can do this in 2–5 minutes. That is a bargain: in under five minutes, we turn an emotional afterthought into a learning signal.
Micro‑scene: the daily standup check
We set a daily end‑of‑day habit. After work we open the Brali LifeOS task for that morning’s prediction. We mark the three reasons. We find one unexpected factor—an external vendor delay—that none of our reasons included. We mark that and set one micro‑task for tomorrow: “Add vendor check to timeline.” The learning loop closes quickly.
Part 4 — Quantify the habit: numbers we can track today
Habits need measurable targets. For this practice we track two simple metrics:
- Count: number of pre‑event predictions recorded per week.
- Calibration error (optional): difference between predicted probability and actual outcome tally across events.
Target for beginners (30 days)
- 3 predictions per workweek (≈12 per month).
- After 4 weeks, compute simple calibration: if you made 10 predictions at 70%+ confidence and 7 of them occurred, that’s well calibrated for that bin.
Sample Day Tally (how to reach the target using 3–5 items)
- Morning sprint prediction (1 minute): 1 prediction, 60% confidence.
- Midday client reply prediction (30 seconds): 1 prediction, 40% confidence.
- Evening personal message reply prediction (30 seconds): 1 prediction, 20% confidence.
Totals for the day: 3 predictions, roughly 2 minutes of logging (≈40 seconds per prediction). Monthly projection: 3 × 20 workdays = 60 predictions (more than enough data to start measuring calibration after 2–3 weeks).
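As a worked sketch of these two metrics, the Python below tallies a hypothetical week of entries; the numbers are invented for illustration.

```python
# One week of entries: (predicted probability as a fraction, outcome 1 = happened, 0 = did not).
week = [(0.6, 1), (0.4, 0), (0.2, 0), (0.7, 1), (0.7, 0), (0.3, 0)]

count = len(week)
avg_predicted = sum(p for p, _ in week) / count
observed_rate = sum(o for _, o in week) / count
calibration_gap = abs(avg_predicted - observed_rate) * 100  # percentage points

print(f"predictions this week: {count}")
print(f"average predicted: {avg_predicted:.0%}, observed: {observed_rate:.0%}")
print(f"simple calibration gap: {calibration_gap:.0f} points")
```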
Part 5 — Brali LifeOS integration: tasks, check‑ins, and journaling
We use three Brali components:
- Task: “Make prediction before X event” with a 1‑minute template in the task body.
- Check‑in: scheduled at the predicted outcome time or end‑of‑day to review the prediction.
- Journal: for a 1‑paragraph reflection once a week to synthesize patterns.
Mini‑App Nudge
Set a Brali micro‑module: “Daily prediction—3 items.” It prompts at 9:00, 13:00, and 18:00 with the 3‑field template and a one‑click review when the day ends.
Part 6 — The trade‑offs and why we still do it
There are immediate trade‑offs: time and discomfort. Recording predictions takes minutes and admitting uncertainty is uncomfortable. There is also a cost of attention: we spend mental bandwidth expressing doubt rather than acting. The trade‑off pays off because we get sharper feedback that changes future decisions.
Quantifying payoff: if we make a small decision change that saves one meeting (30 minutes) or prevents one error per month, that’s already a strong return. If our calibration improves by 10% over a quarter, our forecasts become more actionable and we make fewer costly bets.
Part 7 — Common misconceptions and edge cases
Misconception 1: “I’ll just rely on memory; I don’t need to write predictions.” Memory is reconstructive and biased; we will systematically overstate what we knew.
Misconception 2: “This is only for big decisions.” It’s more useful for small, frequent choices because aggregation matters. We learn via many small signals.
Misconception 3: “Numbers feel fake.” A numeric probability is not a promise; it’s a snapshot of our current belief. It is an instrument for learning.
Edge case: rare events. When outcomes are rare (e.g., a major product failure once every few years), we cannot expect good calibration fast. For those, we need scenario planning and explicit frequency priors, not just single predictions.
Risk/limits: this method does not remove responsibility. If we forecast low risk and fail to mitigate, we still must act. Also, overfocusing on calibration can lead to paralysis if we obsess over probabilities rather than decisions. Use this as a learning tool, not as a procrastination device.
Part 8 — A week‑by‑week practice plan we can start today
Week 1: Capture baseline. Make 3 short predictions each workday. Use the 3‑field template. Do not judge yourself—just capture.
Week 2: Review and tag reasons. At the end of each day, mark which reasons were true/false and add one micro‑task to improve signal next time.
Week 3: Aggregate and measure. Count how often high‑confidence predictions occurred. Do a weekly journal entry (5 minutes) summarizing patterns.
Week 4: Adjust decision rules. Based on week 3, adjust your decision threshold. For example, if you are repeatedly overconfident at 80% in certain types of tasks, lower your default to 60% or create a mitigation policy.
Each week’s work is concrete and short: 2–10 minutes per day, total ≈ 15–40 minutes per week.
Micro‑scene: the pivot
We assumed a long structured review form would make us wiser → observed that it increased friction and led to drop‑off → changed to the short 3‑field template with daily push reminders in Brali. The pivot made the practice sustainable without losing signal quality.
Part 9 — How to rate reasons cleanly so the feedback is informative
We need a compact code for reasons:
- True (T): The reason was actually present and materially affected outcome.
- Partly true (P): The reason existed but was not decisive.
- False (F): The reason did not occur or had no effect.
When we mark reasons, add a one‑word note: “magnitude” (high/med/low). Over weeks we will see which reasons are high‑magnitude predictors.
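Over several weeks, a quick tally shows which reason categories keep landing as “true” with high magnitude. A small Python sketch, with invented category names:

```python
from collections import Counter

# Each reviewed reason: (category, verdict T/P/F, magnitude high/med/low).
reviewed = [
    ("dev bandwidth", "T", "high"),
    ("unclear specs", "P", "med"),
    ("vendor delay", "T", "high"),
    ("budget", "F", "low"),
    ("dev bandwidth", "T", "high"),
]

# How often each category was a true, high-magnitude driver of the outcome.
strong = Counter(cat for cat, verdict, mag in reviewed if verdict == "T" and mag == "high")
for category, hits in strong.most_common():
    print(f"{category}: true + high magnitude {hits} time(s)")
```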
Example entry
Prediction: “The campaign will underperform on CTR.” Confidence: 75%. Reasons: poor headline (T, high), weak audience targeting (P, med), budget constraints (F). Review: CTR underperformed by 25%. The headline was the major driver. We add a micro‑task: “A/B test 3 new headlines before next launch” and reduce our baseline confidence for next launch by 10%.
Part 10 — Shortcuts for busy days (≤5 minutes)
If we are running between meetings and cannot do the full template, use this ≤5‑minute alternative:
- Quick form (≤2 minutes): one‑sentence prediction; binary confidence (likely/unlikely/uncertain); one reason in 5 words.
- Quick review (≤3 minutes): outcome yes/no; one micro‑task for tomorrow.
This keeps the habit alive on busy days.
Part 11 — Examples across domains
We show concrete examples so the template maps onto actual choices.
Work — product deadline
Prediction: “Feature A will be ready by Friday.” Confidence: 55%. Reasons: dev bandwidth low (T, high), unclear specs (P, med), no blocking dependencies (F). Review: Not ready; blocked by legacy bug. We mark dev bandwidth as T and create micro‑task: “Daily 15‑minute bug triage.”
Finance — small investment decision
Prediction: “This stock will rise 10% in 3 months.” Confidence: 30%. Reasons: sector momentum (P), valuation cheap (P), company earnings reliable (F). Review: Stock down 5%. The earnings shock made a difference; we mark company earnings as F and change decision rule: avoid high leverage on companies with low earnings visibility.
Relationships — message reply
Prediction: “She’ll reply by tonight.” Confidence: 40%. Reasons: typically responsive (P), busy week (P), time zones (T). Review: No reply. We do not interpret this as personal rejection; documentation shows time zones mattered. Action: do not escalate.
Health — habit adherence
Prediction: “I will do a 20‑minute run this evening.” Confidence: 65%. Reasons: scheduled run time (P), cold weather forecast (F), motivation usually holds after work (P). Review: Skipped. Weather changed. Mark reason weather F, adjust plan: “Indoor 15‑min treadmill if cold.”
Each example shows a small time cost and concrete micro‑task after the outcome.
Part 12 — Aggregation and measuring calibration
After a few weeks we have a dataset: predictions with probabilities and outcomes. How do we measure calibration simply?
Bucket method (quick)
- Put predictions into 10% bins (0–10, 11–20, … 91–100).
- Count how often events in each bin occurred.
- Compare observed frequency to predicted midpoint. For example, for all 70% predictions, did outcomes happen ≈70% of the time?
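The bucket method fits in a few lines. A Python sketch with hypothetical records, using a simplified decade binning:

```python
from collections import defaultdict

# (predicted probability in percent, outcome 1/0), collected over several weeks.
records = [(72, 1), (75, 0), (68, 1), (80, 1), (35, 0), (30, 1), (90, 1), (85, 1)]

bins = defaultdict(list)
for prob, outcome in records:
    low = min(prob // 10 * 10, 90)  # simplified decade bins: 0-9, 10-19, ..., 90-100
    bins[low].append(outcome)

for low in sorted(bins):
    outcomes = bins[low]
    observed = sum(outcomes) / len(outcomes) * 100
    print(f"{low}s bin: predicted ~{low + 5}%, observed {observed:.0f}% (n={len(outcomes)})")
```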
Quick numerical rule of thumb
- If your observed frequencies deviate by 10–20 percentage points from predicted, you have room to improve.
- Aim for calibration error under 15 percentage points across bins after 8–12 weeks.
We do not need perfect Bayesian belief. We need better than random.
Part 13 — How to avoid three common errors in the habit
Error 1: Retro‑writing. We rewrite our pre‑event certainty after the fact. Fix: lock the original prediction (timestamp it) and keep the review separate.
Error 2: Only recording successes. We are tempted to preserve proud forecasts and delete failures. Fix: adopt a policy: record every pre‑event prediction you commit to. If it was not recorded, treat it as missing data—not a skill failure.
Error 3: Overfitting from small samples. We may change our whole decision framework after one surprising event. Fix: require at least 5–10 similar cases before adjusting global rules; for big changes, do scenario checks.
Part 14 — The emotional side: humility, relief, and frustration
We train ourselves to be less self‑flattering. There is relief in admitting uncertainty—we release pressure to be right every time. There is also frustration when patterns show we are overconfident. Both emotions are useful. Humility helps us seek more data; frustration can be converted into curiosity: “Which signals did I miss?”
Micro‑scene: night journaling
At 9 pm we open the week’s prediction log. We see three times in a row we were 80% confident about a vendor timeline and were wrong twice. A small shiver of irritation—then we write a note: “Vendor lead times systematically underreported; require two‑week buffer.” That single policy reduces future surprise and frees us from rehashing blame.
Part 15 — When this method will disappoint you
This practice gives noisy early returns. If you expect perfect calibration in a week, you will be disappointed. Rare events and single large decisions will not converge quickly. Also, if we record predictions but never change behavior based on them, the practice is sterile. The method works when we iterate: predict, observe, adjust.
Part 16 — Learning signals to prioritize
Not all feedback is equal. We should weight reasons by predicted magnitude and by the specificity of the signal. Examples:
- Specific operational signals (e.g., “server latency above 200ms”) are high signal.
- Vague feelings (“I had a gut feeling”) are lower signal but still useful if documented.
Each reason should be tagged with expected magnitude (high/med/low). Over time we will learn which categories of reason have predictive power.
Part 17 — How teams can scale the habit
We assume teams will be messy at first. One practical pattern:
- Before each sprint planning, the team records 3 predictions for the sprint (delivery on time, bug count under X, user sign‑ups > Y).
- After the sprint, the team reviews and assigns reason marks. A short 10‑minute retro focuses on what reasons were true and what failed.
- If the team records 12 predictions per month, they will have enough data in 6–8 weeks to adjust planning buffers.
This reduces blame during retros and focuses conversation on mechanisms. The trade‑off is initial overhead; the payoff is clearer causal attribution.
Part 18 — Tools beyond Brali and why timestamping matters
If Brali is unavailable, use a simple timestamped note in your phone or an email to yourself. Email to self is useful because it creates an external record that cannot be easily altered. The timestamped record prevents retroactive edits and helps us trust our data.
Why timestamp? Because hindsight bias is not just sloppy memory; it’s motivated editing. A timestamped prediction is a commitment device that surfaces true prior beliefs.
Part 19 — Sample logs (realistic and short)
We provide three condensed entries we might have in Brali.
Entry 1 (work)
- Prediction: “Release 1.2 will be accepted by QA Friday.” Confidence: 60%.
- Reasons: code freeze in place (P, med); several flaky tests (T, high); no major dependencies (F).
- Outcome: Accepted Monday (2‑day delay).
- Reason checks: flaky tests (T), dependency emerged (P).
- Micro‑task: “Add extra CI job for flaky tests.”
Entry 2 (finance)
- Prediction: “ETF will outperform cash by 2% in 3 months.” Confidence: 55%.
- Reasons: market tailwinds (P), low fees (P), inflation risk (T).
- Outcome: Underperformed 1.5%.
- Reason checks: inflation (T), market tailwinds weaker (F).
- Micro‑task: “Reweight portfolio hedge.”
Entry 3 (relationships)
- Prediction: “Colleague will take initiative on the doc.” Confidence: 30%.
- Reasons: busy schedule (T), low past initiative (T), our clear request (F).
- Outcome: They did not act.
- Reason checks: busy schedule (T), low initiative (T).
- Micro‑task: “Assign a specific owner; set calendar reminder.”
Part 20 — Weekly synthesis ritual (10–20 minutes)
Once a week, take 10–20 minutes for synthesis.
- Open that week’s predictions (10–30 entries).
- Compute count and a basic calibration estimate: for predictions above 70%, how many happened?
- Look for repeated “true” reasons (high signal).
- Write a 1‑paragraph takeaway and one policy change.
This ritual keeps the habit alive and produces a small set of decision rules we can implement.
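The weekly numbers can be pulled from the log in a few lines. A Python sketch over a tiny invented week:

```python
from collections import Counter

# A tiny invented week of reviewed entries.
entries = [
    {"probability": 80, "happened": True,  "true_reasons": ["flaky tests"]},
    {"probability": 75, "happened": False, "true_reasons": ["vendor delay"]},
    {"probability": 40, "happened": False, "true_reasons": []},
    {"probability": 72, "happened": True,  "true_reasons": ["flaky tests"]},
]

high = [e for e in entries if e["probability"] > 70]
hits = sum(e["happened"] for e in high)
repeated = Counter(r for e in entries for r in e["true_reasons"])

print(f"recorded this week: {len(entries)}")
print(f"above 70% confidence: {len(high)}, occurred: {hits}")
print("repeated true reasons:", repeated.most_common(3))
```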
Part 21 — Using probabilities in decisions (practical rule)
Probabilities change decisions in two ways:
- Risk thresholding: set action thresholds. For example, if we estimate >80% chance of missing a deadline, trigger contingency plan A (add one extra developer) and communicate expectations to stakeholders.
- Effort allocation: higher confidence in risk justifies higher mitigation effort.
If we consistently miscalibrate, tune the thresholds. For instance, if our 80% predictions are only 60% actual, treat their weight as 60% until calibration improves.
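A Python sketch of that adjustment: treat a stated confidence at its historically observed rate until calibration improves. The mapping below is hypothetical.

```python
# Observed realization rates per stated confidence level, taken from past reviews (hypothetical).
observed_rate = {80: 0.60, 70: 0.65, 50: 0.50}

def effective_probability(stated: int) -> float:
    """Downweight a stated confidence to its historically observed rate, if we have one."""
    return observed_rate.get(stated, stated / 100)

def should_trigger_contingency(stated: int, threshold: float = 0.8) -> bool:
    return effective_probability(stated) >= threshold

print(should_trigger_contingency(80))  # False: our 80% claims have only landed 60% of the time
print(should_trigger_contingency(90))  # True: no history for 90%, so take it at face value
```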
Part 22 — When to escalate a prediction to a plan
Not every prediction requires action. Use this simple rule:
- If predicted bad outcome probability >50% and expected negative impact > moderate (e.g., time loss > 4 hours, cost > $200, reputational effect felt by 2+ stakeholders), create a mitigation micro‑task within 24 hours.
This reduces overreaction while ensuring significant risks are addressed.
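The escalation rule reads naturally as a single check. A Python sketch using the thresholds named above as tunable defaults:

```python
def should_escalate(prob_bad: float, hours_lost: float = 0,
                    cost_usd: float = 0, stakeholders_affected: int = 0) -> bool:
    """Create a mitigation micro-task within 24 hours if risk and impact are both meaningful."""
    impact_is_moderate = hours_lost > 4 or cost_usd > 200 or stakeholders_affected >= 2
    return prob_bad > 0.5 and impact_is_moderate

# A 60% chance of slipping that would cost roughly a day of rework: escalate.
print(should_escalate(prob_bad=0.6, hours_lost=8))  # True
```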
Part 23 — Combining this with scenario planning
For complex, high‑impact events, add a short scenario step: list 2–3 plausible outcomes and assign probabilities that sum to 100% (or leave some probability mass for “something else”). This is slightly longer but pays off when stakes are high. For daily practice we keep to single‑outcome predictions.
Part 24 — Calibration over months: expected learning curve
Expect slow gains. Calibration typically improves modestly in the first month (say 5–10% better), then more gradually. Aim for steady improvement rather than quick perfection. If after 3 months there is no detectable improvement, audit the process: are predictions being recorded honestly? Are reviews happening? Are reasons specific?
Part 25 — Cognitive scaffolding to reduce bias
We can further reduce hindsight bias by using these scaffolds:
- Ask disconfirming questions before making a prediction: “What would disprove my belief?”
- Force alternative outcomes: write the most plausible reason the opposite could happen.
- Create pre‑mortems: imagine failure and record reasons before the event.
Each scaffold increases time but improves the quality of reasons.
Part 26 — Habit maintenance: accountability and incentives
The best way to stick to this habit is a small accountability structure:
- Pair up with one colleague or friend. Share one prediction each day and do a single joint review weekly.
- Or set a streak in Brali LifeOS: record predictions 5 days in a row to earn a personal reward.
We prefer small, meaningful rewards—coffee treat, 30 minutes of uninterrupted reading—rather than gamified vanity metrics.
Part 27 — When we get emotionally hit: reframing mistakes
If a repeated pattern shows consistent error, we may feel bad. Reframe mistakes as data. “We were overconfident here” is a technical observation, not a personal indictment. Treat it like adjusting a thermostat—tighten or loosen convictions based on feedback.
Part 28 — Examples of policy changes after learning
Realistic policy shifts that come from this practice:
- Add 10% time buffer to all external dependency tasks.
- Require an explicit readiness checklist for demos.
- Default to inviting a decision owner for any cross‑team deliverable.
- Create a rule: “If predicted downside >$500, escalate to manager.”
Each policy is a small, implementable change that reduces repeat errors.
Part 29 — Mini‑experiments to test signal validity
If a reason frequently shows as “true”, test it. For example, if headline wording consistently affects CTR, run a controlled A/B test for 2 weeks (10,000 impressions) to quantify effect size. The habit of prediction points us to which micro‑experiments to run; the experiments provide stronger evidence.
Part 30 — Accountability check: what we measure and why
We track:
- Count of recorded predictions per week (usage metric).
- One numeric calibration metric (difference in percentage points for a key probability bin).
These are simple, interpretable, and actionable.
Mini‑App Nudge (again, inside the narrative)
We might set a Brali check‑in that asks: “Did you record at least 3 predictions today?” It’s a micro‑nudge that keeps the practice front of mind. Use it for 30 days and then reduce frequency.
Part 31 — Addressing stubborn bias: when hindsight is social
In teams, hindsight bias can become a social defense. People frame events to protect reputations. Pre‑registered predictions and public logs reduce social rewriting. Make it a cultural norm: celebrate accurate predictions and, more importantly, honest documentation of uncertainty. This requires leadership modeling.
Part 32 — Edge cases: high volatility and black swans
For very high‑volatility environments, this method still helps but requires wider bins. Use wider probability categories and focus on scenario coverage: record multiple plausible outcomes and then track which one holds. For black swan events—by definition rare—use scenario planning and stress tests rather than single predictions.
Part 33 — Quick checklist before an important decision (≤3 minutes)
- Write the one‑sentence prediction and a 0–100% probability.
- List 2–3 short reasons, each tagged with expected magnitude.
- If predicted negative outcome >50% and impact > moderate, make a contingency plan.
Part 34 — What success looks like after three months
After three months we expect:
- 60–120 recorded predictions.
- Observed calibration improvement of 5–20 percentage points in key bins.
- 3 small policy changes implemented (buffers, checklists, owner assignment).
- A clearer habit: predictions become the default step before key meetings.
Part 35 — Troubleshooting: if we stop, why it happens and how to restart
Reasons we stop:
- It feels trivial at first.
- The time cost feels like busywork.
- We don’t see immediate benefit.
Restart steps:
- Reduce to the ≤5‑minute shortcut for one week.
- Pair with a colleague for accountability.
- Do a single weekly synthesis every Sunday for 4 weeks to see patterns.
Part 36 — Final micro‑scene: a calm retrospective
It’s Friday evening. We have 15 predictions from the week. We open Brali LifeOS, run the weekly synthesis, and notice that our 80% predictions had a 60% realization rate. We feel a sting—then relief. We adjust one policy: require one extra test for any item we assign above 75% confidence. That policy saved two hours the following week and prevented a late‑night bug fix.
Check‑in Block
Daily (3 Qs):
- What was the prediction you recorded today? (one line)
- How confident did you feel (0–100%)? (numeric)
- Which reason was most decisive? (one short phrase)
Weekly (3 Qs):
- How many predictions did we record this week? (count)
- For predictions above 70% this week, how many occurred? (count)
- What is one policy change we will implement next week? (one sentence)
Metrics:
- Count of predictions logged per week (count).
- Calibration gap for the 70–80% bin (percentage points).
One simple alternative path for busy days (≤5 minutes):
- Use the quick form:
- One sentence prediction.
- Binary confidence: likely/unlikely/uncertain.
- One reason (5 words).
- Do a one‑line morning or evening review: outcome yes/no; one micro‑task.
We assumed a detailed form would make learning richer → observed drop‑off in use → changed to the short template and micro‑nudges. That explicit pivot shifted the practice from theory to habitual action.
We end with a small invitation: let us treat prediction as a daily instrument, not a moral test. If we make this tiny commitment—three short predictions today—we will, in weeks, have clearer feedback and fewer “I should have known” moments. We will also have a pragmatic archive of what we actually believed, so future judgment is an act of learning rather than of flattering memory.

About the Brali Life OS Authors
MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.
Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.
Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.