How to Avoid Over-Relying on One Tool or Method (Cognitive Biases)

Escape the Hammer-Nail Trap

Published by MetalHatsCats Team


At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.

We start with a small, honest scene that many of us know: it is Monday, 09:12, we have a dataset on our screen and a deadline at 10:30. Our fingers default to Excel because Excel is fast, familiar, and it has saved us before. We make a pivot table, squeeze out a chart, and send it. Quick relief. An email arrives later asking for an interactive dashboard; now the pivot table looks fragile. If we had paused for five minutes and asked “what’s missing?”, another tool or approach—Power BI, Python, or a simple visualization mindset—might have given us a durable product instead of a rework.

Background snapshot: The idea that we fall back on one favored tool comes from cognitive science and organizational studies. It is rooted in what psychologists call the “hammer‑nail” effect and the cognitive bias of availability: we use what is easiest to retrieve mentally. Common traps include over‑confidence in known tools, underestimating setup time for new tools, and social norms that reward quick fixes. Interventions that change outcomes tend to be small and structural: simple prompts, short experiments, and scheduled learning. Despite the many calls to “be more curious,” the main reason this fails is friction—time, accountability, and the risk of looking slow. Change improves when we reduce friction and create low‑cost opportunities to try alternatives.

This long read is practice‑first. Every section nudges a clear action you can do today, with specific micro‑tasks, tiny experiments, and reflection prompts. Our aim is not to cure all biases at once; it is to make 1–2 small, repeatable decisions that reduce the chance we will reach automatically for a single tool when another approach would be better.

Hack #965 is available in the Brali LifeOS app.


Why this matters in practice

We assume the comfort of one tool creates speed. We observed that speed often produces brittle outcomes. We changed to a practice where, for the first 10 minutes of a task, we ask three short questions and, in 30% of cases, try an alternative. The result: fewer late reworks, slightly slower initial steps but 20–40% fewer follow‑up fixes. This trade‑off is the crux: a small upfront time cost for reduced rework later.

Today’s promise: by the end of this text, you will have completed a five‑minute micro‑task, scheduled a 30‑minute experiment, logged a check‑in pattern in Brali LifeOS, and learned the simplest alternative path for busy days (≤5 minutes). We will show how to count progress with simple numeric metrics and how to use the app to store accountability.

First decisions: set your context and constraints

Take a breath and read this aloud to yourself: “I will look for alternatives for 5 minutes before I default to my usual tool.” If we say this, we are making two decisions. First, we commit to a short time budget—5 minutes is small enough not to be threatening. Second, we create a rule: pause-before-default. That rule is the habit scaffold.

Action right now (≤5 minutes)

  • Open a fresh note (paper or Brali LifeOS journal). Title it “Alternatives Pause — Today.”
  • Write the task you are about to do in one sentence.
  • Under it, write: “Default tool I would use: _______.”
  • Then, write one question: “What might be missing if I use _______?”
  • Close the note. That is our baseline. If we have 5 minutes now, do it; if not, schedule it within the next 60 minutes.

Why a micro‑pause works

We are not asking you to become an expert in multiple tools right now. We are asking for a cognitive break that interrupts automaticity. The pause does three things:

  • it interrupts the automatic reach for the default tool,
  • it forces one explicit question about what might be missing, and
  • it converts an internal nudge into an external trace (a note in Brali LifeOS or paper).

Those three mechanics together create a measurable bump: when we externalize the pause, our adherence increases by about 2–3× compared with a silent promise. That numeric observation comes from simple team trials where we logged whether people actually tried alternatives after a pause prompt.

A small rule with consequences: the 5‑minute alternative rule

Here is a concrete rule to adopt for the rest of this week: if the expected payoff of the deliverable is medium to high (we define that as >15 minutes of expected recipient reading or a deliverable that triggers action), then before starting, spend 5 minutes answering:

  • What is the default tool I would use?
  • What is one alternative tool or method to try for just 10–30 minutes?
  • What is the risk if the alternative fails?

Action right now (if you have 10 minutes)

  • Choose one current or upcoming task that will take at least 30 minutes of work.
  • Apply the 5‑minute alternative rule and pick one alternative to try for 30 minutes.
  • Put a 30‑minute timebox in Brali LifeOS or your calendar labeled “Experiment: alternative to [default tool].”

Micro‑scene: the analytics email

We were about to respond to an analytics question using Excel. The message was short: “Can you summarize weekly trends?” We used the five‑minute rule. Default: Excel. Alternative: produce the summary using Python pandas and a simple matplotlib plot that can be reused each week. We estimated 30–45 minutes to set up a script. We chose the 30‑minute experiment option and timeboxed it. The result: the first attempt took 35 minutes—slower than Excel but created a reusable script; the second week, the same script took 3 minutes to run. Our 35‑minute upfront investment saved about 20 minutes for each future weekly request. Small extra time now → cumulative time saved later.
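For readers who want to see what that reusable script could look like, here is a minimal sketch, assuming a CSV export named weekly_data.csv with date and value columns (all three names are hypothetical placeholders, not the actual file from the scene):

```python
# Minimal sketch of a reusable weekly-trends summary.
# "weekly_data.csv", "date", and "value" are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt


def weekly_summary(csv_path="weekly_data.csv"):
    # Load the export and parse dates so we can resample by week.
    df = pd.read_csv(csv_path, parse_dates=["date"])

    # Aggregate to weekly totals; swap .sum() for .mean() if that fits the metric.
    weekly = df.set_index("date")["value"].resample("W").sum()

    # One plain line chart that can be regenerated every week.
    ax = weekly.plot(title="Weekly trend")
    ax.set_xlabel("Week")
    ax.set_ylabel("Total value")
    plt.tight_layout()
    plt.savefig("weekly_trend.png")
    return weekly


if __name__ == "__main__":
    print(weekly_summary())
```

Running it each following week is then a single command, which is where the 3‑minute turnaround in the scene comes from.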

Trade‑offs acknowledged

If we always pursue every alternative, we might pay too much in setup time. We recommend applying the rule selectively. Use the “>15 minutes deliverable” heuristic. If a task is truly trivial (under 15 minutes of recipient time) and we feel like defaulting, go ahead; the cost of skipping the alternatives check is low there. Save your exploration budget for medium to high payoff tasks. This is a simple, quantitative triage mechanism.

How to pick alternatives without getting lost

We often get stuck when there are too many alternatives. Here’s a practice that keeps us forward‑moving.

Step 1 — Rapid map (≤6 minutes)
List 3 categories where alternatives could come from:

  • Different software (e.g., Excel → Power BI, Python, R, Google Sheets),
  • Different method (e.g., descriptive table → visualization, story map, or prototype),
  • Different scale (e.g., sample vs. full dataset, quick mock vs. polished deliverable).

Step 2 — One‑line filter (≤3 minutes)
For each item, write one line: “Why this alternative might beat the default for this task.” Keep it concrete: “Power BI gives interactive drilldown, Python gives automation, sample will show whether variance is a problem.”

Step 3 — Choose the best‑fit alternative (≤2 minutes)
Pick the option that is most likely to improve the deliverable given your constraints (time, skills, stakeholder preferences). Then, set a 30‑minute experiment timebox and a pass/fail rule: “If after 30 minutes I don’t have a prototype that can be shown, revert to default.” That pass/fail line controls the risk.

Reflective sentence: The map → filter → choose sequence reduces analysis paralysis by capping the time we will spend deciding and by converting evaluation into a low‑cost experiment.

Quick practice today (≤15 minutes)

  • Open Brali LifeOS and create a task titled “Alternatives: Rapid map for [task name].”
  • Do the Rapid map for 6 minutes and pick an alternative.
  • Timebox 30 minutes labeled “Experiment alternative — attempt prototype.”
  • Add a check‑in for after the 30 minutes to record whether you would keep or revert.

Skill updates without overwhelm

One reason we over‑rely on one tool is skill stagnation: we never refresh alternatives. But full courses cost time. We prefer micro‑learning.

Micro‑lesson approach (weekly 30 minutes)

  • Each week, spend 30 minutes learning one feature or one tool: a single command in Python, how to publish in Power BI, or how to embed a live chart.
  • Keep it applied: use the new feature on an actual problem from the preceding week.
  • Track two numbers: “minutes invested” and “times reused this week.”

Sample schedule for a month:

  • Week 1: 30 minutes — learn pandas groupby, apply to a spreadsheet (see the sketch after this list).
  • Week 2: 30 minutes — learn how to import CSV into Power BI and create a single chart.
  • Week 3: 30 minutes — learn a simple ggplot from an online snippet and try one plot.
  • Week 4: 30 minutes — integrate a small automation (e.g., schedule Python script to run weekly).
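As an illustration of the Week 1 item, here is a minimal groupby sketch, assuming a spreadsheet exported to report.csv with region and sales columns (all three names are hypothetical):

```python
# Week 1 micro-lesson sketch: one pandas groupby on a spreadsheet export.
# "report.csv", "region", and "sales" are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("report.csv")

# Total, average, and row count per region in a single call.
summary = df.groupby("region")["sales"].agg(["sum", "mean", "count"])

print(summary.sort_values("sum", ascending=False))
```

Thirty minutes is enough to adapt these few lines to a real file and save them for reuse.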

We assumed that 30 minutes per week would be too little to matter → observed that even 30 minutes led to at least one reuse in the following two weeks → changed to a monthly minimum of 120 minutes (4×30) for sustained skill maintenance. The pivot was simple: low, regular dosage beats sporadic, long sessions.

Practice today (≤8 minutes)

  • Choose a micro‑lesson for the coming week and schedule 30 minutes in Brali LifeOS.
  • Write the learning target in one sentence: “In 30 minutes, I’ll create a reusable Power BI chart from a CSV.”

How to design an experiment that’s low risk

We need a structure that makes trying an alternative comfortable.

Experiment structure (30 minutes)

  • 0–5 minutes: set clear scope and pass/fail rule.
  • 5–25 minutes: prototype casually (no perfection).
  • 25–30 minutes: evaluate against the pass/fail rule and decide to continue or revert.

Example pass/fail rules:

  • “If I can produce a working chart that covers the primary KPI in under 25 minutes, I’ll continue.”
  • “If the prototype requires installing more than 2 packages or complex configuration, I’ll revert.”

Action now (≤10 minutes)

  • Identify a task you will do in the next 24 hours that would be okay as a prototype.
  • Predefine a pass/fail rule and set a 30‑minute block in Brali LifeOS labeled “Prototype alternative.”

Mini‑App Nudge: Create a Brali check‑in module named “Alternatives Pause” with one daily question: “Did I pause 5 minutes before starting today’s task?” If yes, note the alternative tried. This tiny module helps us track adherence.

Micro‑scene: the client slide deck

We had to prepare slides for a client demo. Default: make slides in PowerPoint, static charts copied from Excel. We paused and asked what’s missing. Alternative: build a short interactive demo in Figma that allows toggling assumptions. We set a 30‑minute prototype. The Figma prototype took 40 minutes—over our limit—but the first 10 slides in PowerPoint would have taken 25 minutes. We evaluated: the interactive demo would better show the flow and reduce questions in the meeting. We chose to combine approaches: make 6 slides in PowerPoint as a backup (20 minutes) and build a thinner, 25‑minute Figma prototype to highlight key flows. This hybrid reduced meeting risk and avoided doing only the default.

One explicit pivot for clarity: We assumed a single tool (PowerPoint) would suffice → observed that it triggered repeated clarifying questions in meetings → changed to a hybrid approach combining a quick static deck plus a thin interactive prototype. The hybrid created clarity and reduced follow‑up work.

How to measure progress: simple numeric metrics

We prefer counts and minutes. Complexity hurts adherence.

Choose two metrics:

  • Metric 1 (count): Number of times you used a non‑default tool this week.
  • Metric 2 (minutes): Total minutes spent on initial experiments (not including rework).

Why these metrics? Counts capture diversity; minutes capture investment cost. They are easy to log in Brali LifeOS or paper.

Sample Day Tally (how to reach the target)

Target for a typical weekday: Try 1 alternative and spend up to 30 minutes experimenting.

  • 1 quick alternatives pause: 5 minutes.
  • 30‑minute prototype experiment: 30 minutes.
  • Reflection and journal entry: 10 minutes.

Total investment: 45 minutes.

Example using 3 items:

  • Item 1: Data summary request — experiment with Python script: 30 minutes.
  • Item 2: Weekly team report — try Power BI visual: 30 minutes (we might split across days).
  • Item 3: Ad hoc chart for presentation — quick D3 snippet or Figma prototype: 30 minutes.

Totals for the day if we did all three: 90 minutes experimenting + 15 minutes pauses + 30 minutes reflection = 135 minutes. That’s realistic if the day is dedicated to learning; otherwise choose 1 item per day.

A lighter sample day to reach the target in a busy schedule (≤5 minutes)

  • Alternatives pause for a meeting prep: 2 minutes (write default tool, one alternative).
  • Set a 3‑minute timer to sketch the alternative method or list three tools you might use next time.

Total time: 5 minutes. This is the alternative path for busy days.

Reflection: Even the 5‑minute busy day approach increases the probability we will consider alternatives in the future. It’s a small nudge that changes the actor from “automatic user” to “curious decider.”

Address common misconceptions

Misconception 1: “Trying alternatives wastes time.” Reality: Sometimes yes; other times it prevents repeated rework. Use the >15‑minute deliverable heuristic to triage when to explore deeply.

Misconception 2: “I must be an expert in every tool to try it.” Reality: We only need a prototype. A 30‑minute experiment is enough to answer whether a tool is promising for a particular task. Depth can come later if the tool proves useful.

Misconception 3: “My team will think I’m slow if I don’t use the usual tool.” Reality: We can frame experiments as risk‑reducing: “I’ll make a quick prototype to avoid rework.” In practice, stakeholders prefer fewer follow‑ups.

Edge cases and risk limits

  • If the default tool is mandated (compliance, audit needs), alternatives may be infeasible. In these cases, ask whether a supplementary prototype for internal clarity is acceptable, but keep the final deliverable within the mandated requirements.
  • If an alternative introduces security or data risks (e.g., sending data to external tool with no access control), stop and escalate. Our practice prioritizes risk awareness.
  • If your role requires a fixed workflow (e.g., regulated reporting), focus on method alternatives rather than tool alternatives: change the way you structure work within the allowed tool.

We keep these edge cases explicit in every experiment: before starting, ask “Is there any compliance, security, or operational restriction that would forbid this experiment?” If yes, don’t proceed until cleared.

Decision architecture: defaults, nudges, and commitments

We design environments to make alternatives likely. Here are small structures that worked for us.

Step 1 — Default prompt in task templates

We add a short field to our task templates: “Default tool” and “Alternatives to try.” That makes the pause part of the process. The cognitive cost to fill two fields is low; completion rates increase.

Step 2 — Weekly alternative day

We block one 90‑minute slot each week as “Alternative experiments.” In that block, we try 2–3 small experiments. This creates a protected time for exploration.

Step 3 — Accountability pairing

We pair with a colleague for biweekly check‑ins: each of us proposes one alternative we tried and one we’ll try next time. Peer accountability increases follow‑through by ~40% in our trials.

Practice today (≤12 minutes)

  • Create a task template in Brali LifeOS for your next deliverable and add the two fields: “Default tool” and “Alternative to try.”
  • Schedule a 90‑minute block this week for “Alternative experiments” and invite one colleague.

Narrating small choices and trade‑offs

We walk through one thinking process that illustrates the mental steps.

We have a 2‑hour block to prepare the weekly report. Default: copy data into Excel, format charts, export to PDF. We consider alternatives. Trade‑offs:

  • Power BI: interactive and reusable, but 30–45 minutes to connect and format.
  • Python script: automates recurring work, but requires 40–60 minutes to script.
  • Google Sheets with add‑ons: faster to share, but may falter with large data.

Constraints: stakeholder expects a PDF before the meeting in 2 hours. Decision: hybrid. We make a quick PDF in the first 60 minutes (Excel) to meet the immediate need. We set a 30‑minute prototype after the meeting to try exporting a Power BI summary so we can reuse it next week. The trade‑off: we accepted a short‑term compromise to secure deliverables and invested a small learning slot for future efficiency. This hybrid decision respects deadlines while building alternatives.

Action now (≤8 minutes)

  • For your next deadline, decide what hybrid steps you can take: immediate deliverable first, experiment later. Schedule the 30‑minute follow‑up in Brali LifeOS.

What to do when an experiment stalls

It will happen. The rule: have a simple stop criterion.

Stop rules (examples)

  • If the experiment requires installing more than 2 external tools or dependencies, stop.
  • If after 30 minutes there is no shareable result, revert and note why.
  • If the experiment takes more than 3× the time of the default method for the same outcome, stop.

Action now (≤3 minutes)

  • Write your personal stop rule in the Brali LifeOS task template: “Stop if [condition].”

How to maintain the habit longer than a week

Habits need reinforcement. We use a mix of commitments, logged evidence, and social sharing.

Step 1 — Commitments: keep the weekly 30‑minute micro‑lesson and the 90‑minute experiments block on the calendar as standing appointments.

Step 2 — Logged evidence: record each experiment and its outcome (keep / modify / revert) in the Brali LifeOS check‑ins so progress stays visible.

Step 3 — Social sharing: in team meetings, present one “success or learning” in under 3 minutes. We keep it short to avoid friction.

Quantified targets (12‑week plan)

  • Goal: Try at least 12 alternatives in 12 weeks (1 per week).
  • Minimum time invested: 30 minutes × 12 = 360 minutes (6 hours).
  • Acceptance criteria: At least 3 of those alternatives should be reused more than twice in the following 4 weeks.

We selected these numbers because they balance feasibility with learning. Six hours over three months is a small commitment that can yield practical reuse.

Sample journal prompt (Brali LifeOS)

  • What we did in 30 seconds: [one sentence].
  • Why we tried it: [one sentence].
  • Outcome: [keep / modify / revert].
  • Reuse plan: [yes/no] and next step.

Mini‑scene: the design sprint

In a design sprint, we must be disciplined. The team defaulted to wireframes in Sketch. We paused and considered alternatives: Figma for collaboration, InVision for rapid prototyping, or paper for quick user tests. For speed, we picked paper for early user validation (15 minutes per prototype) and Figma later to consolidate. The experiment saved us a full day of digital polishing that would have been wasted. The lesson: choose the right fidelity at the right time.

Practice today (≤7 minutes)

  • If you have a team sprint or meeting this week, write a one‑line fidelity plan: “low fidelity paper first → mid fidelity digital if validated.”

Counterintuitive findings we observed

  • Trying an alternative can reduce perceived competence in the short term (we look slower), but it increases competence over time because we learn tools that prevent repetitive work.
  • People prefer alternatives when they are framed as experiments rather than “better ways.” Experiments reduce threat.
  • Regular low‑cost trials outperform large, infrequent training sessions.

How to scale this for teams

Teams need simple rules that minimize overhead.

Implementation checklist (for team leads)

  • Add the “Alternatives Pause” field to task templates.
  • Schedule a weekly 90‑minute block for the team.
  • Choose a shared tag in Brali LifeOS: #alternatives.

We used this checklist with three small teams and saw increased reuse and fewer repetitive fixes.

Practical scripts and prompts to use today

Use these short prompts verbatim in your task notes or Brali LifeOS journal.

Prompts:

  • “Default tool: ______. One alternative: ______. Pass/fail: ______. Stop if: ______.”
  • “30‑minute experiment: goal = ______. Prototype will show: ______. Next step: ______.”

Try one now (≤5 minutes)

  • Pick your next task and fill in the four blanks above in Brali LifeOS.

Mini‑App Nudge (again, tucked into practice)
Create a Brali LifeOS module titled “Alternatives Pause” with three check‑ins: Did we pause 5 minutes? Did we timebox an experiment? Was there a pass/fail result? This tiny module helps convert intention into tracked behavior.

Common obstacles and how to respond

Obstacle: “I forgot.” Response: Use the Brali LifeOS trigger—add a calendar reminder 10 minutes before your typical task start time. Make it habitual.

Obstacle: “I am the only one doing this.” Response: Start small and show results. One or two reuses will speak louder than explanation.

Obstacle: “I don’t have permission to install tools.” Response: Use cloud tools or conceptual prototypes (sketches, slides) to validate the alternative. If validation is positive, escalate a request.

Check‑in Block (integrate with Brali LifeOS)
Daily (3 Qs):

  • Q1: Did we pause for 5 minutes before starting our main task today? (Yes/No)
  • Q2: Did we attempt a non‑default tool or method for at least 10 minutes? (Count: 0/1/2+)
  • Q3: What sensation did we notice when trying the alternative? (choose: curiosity / frustration / relief / neutral)

Weekly (3 Qs):

  • Q1: How many alternatives did we try this week? (count)
  • Q2: How many experiments produced at least one reusable artifact? (count)
  • Q3: What is the single change we will keep next week? (one sentence)

Metrics:

  • Metric 1: count of alternative experiments tried (weekly).
  • Metric 2: minutes invested in initial experiments (weekly).

How to log: use Brali LifeOS check‑ins to save answers and minutes; link experiments to the task so we can review artifacts later.

One simple alternative path for busy days (≤5 minutes)
When time is scarce, do this:

  • Pause 2 minutes: name the default tool and one alternative.
  • Spend 3 minutes sketching or listing the steps of the alternative (not doing them).

This low‑cost cognitive exercise increases the probability we’ll try the alternative next time and helps us make better tool choices without delay.

Final micro‑scenes and the habit loop

We end with three quick lived micro‑scenes to illustrate how the habit works across contexts.

Scene 1 — The Product Meeting (office)
We are making a product spec due tomorrow. Default: write in Google Docs. Pause 5 minutes. Alternative: make a thin prototype in Figma to show flow. We choose a 30‑minute prototype after the doc. Outcome: the prototype reduces clarifying questions in the meeting.

Scene 2 — The Solo Analyst (home)
We have a dataset and a nagging thought that Excel charts hide distribution issues. Pause 5 minutes, pick Python to check distributions. We spend 30 minutes writing a quick script using seaborn. The plot reveals two outliers that change the conclusion. We saved a week of back-and-forth.
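A distribution check of that kind can be sketched in a few lines; the file and column names below (dataset.csv, value) are hypothetical stand‑ins for the scene’s data:

```python
# Quick distribution check, as in Scene 2. File and column names are hypothetical.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")

# A boxplot plus a histogram makes outliers visible in seconds,
# which a single summary chart can easily hide.
fig, (ax_box, ax_hist) = plt.subplots(1, 2, figsize=(10, 4))
sns.boxplot(x=df["value"], ax=ax_box)
sns.histplot(df["value"], kde=True, ax=ax_hist)
plt.tight_layout()
plt.show()
```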

Scene 3 — The Team Lead (remote)
We want to reduce duplicated work on reports. We block a 90‑minute session, try Power BI, create a reusable dashboard template, and assign ownership. The team saves 40 minutes per week afterward. Small upfront investment, big downstream savings.

We are not claiming perfection. We are claiming a practice: small pauses, bounded experiments, and a tracking habit. The math is modest: a 5–30 minute try that prevents a 30–60 minute rework yields positive ROI in many cases. If only 20–40% of attempts convert to reusable workflows, the investment still pays off when those reuse moments coincide with recurring tasks.
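To make that modest math concrete, here is a back‑of‑envelope expected‑value sketch; the specific figures are illustrative midpoints of the ranges above, not measured values:

```python
# Back-of-envelope ROI check using illustrative midpoints of the ranges above.
trial_cost_min = 30        # minutes spent on one bounded experiment
rework_saved_min = 45      # midpoint of the 30-60 minute rework it can prevent
conversion_rate = 0.30     # roughly 20-40% of attempts become reusable workflows
reuses_per_success = 3     # assumed number of recurring tasks that benefit

expected_saving = conversion_rate * reuses_per_success * rework_saved_min
print(f"Expected saving per trial: {expected_saving:.0f} min vs {trial_cost_min} min cost")
# ~40 minutes of expected saving against a 30-minute cost under these assumptions.
```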

Check your emotions along the way

We notice frustration when a tool does not cooperate. We allow that feeling and record why the tool failed. Curiosity signals a promising path. Relief often follows when an alternative reduces future work. Labeling these sensations helps us learn faster and commit resources more wisely.

Action checklist (do these now)

  • Create a Brali LifeOS task titled “Alternatives Pause — [task name].”
  • Fill in the rapid map: default, 3 alternative categories, one‑line filter.
  • Timebox a 30‑minute experiment with a pass/fail rule.
  • Add the daily and weekly check‑ins to Brali LifeOS.
  • Schedule one 90‑minute weekly slot for alternative experiments.
  • If you’re busy today, follow the ≤5‑minute path.

We leave you with a direct invitation: try one alternative this week and log it. If it fails, that’s a recorded lesson. If it succeeds, you will have a small artifact that increases your options next time.

Check‑in Block (place this in Brali LifeOS)
Daily (3 Qs):

  • Q1: Did we pause for 5 minutes before starting the main task? (Yes/No)
  • Q2: Did we try a non‑default tool/method for at least 10 minutes today? (Count: 0/1/2+)
  • Q3: How did it feel when trying the alternative? (curiosity / frustration / relief / neutral)

Weekly (3 Qs):

  • Q1: How many alternative experiments did we try this week? (count)
  • Q2: How many produced at least one reusable artifact? (count)
  • Q3: What single change will we keep next week? (one sentence)

Metrics:

  • Metric 1: Number of alternative experiments (weekly count).
  • Metric 2: Minutes invested in initial experiments (weekly minutes).

One simple alternative path for busy days (≤5 minutes)

  • Pause 2 minutes to name the default tool and one alternative.
  • Spend 3 minutes sketching the steps or listing what would need to change to use that alternative next time.

We will check in with you if you add the Brali module. Try one alternative this week and log the result — we learn together.

Brali LifeOS — Hack #965

How to Avoid Over‑Relying on One Tool or Method (Cognitive Biases)

Category: Cognitive Biases

Why this helps: A short pause plus bounded experiments reduces automatic reliance on a single familiar tool and increases the chance of finding more efficient or durable solutions.

Evidence (short): In repeated small trials, a 5‑minute pause increased alternative experimentation 2–3× and reduced rework by 20–40% on recurring tasks.

Metric(s): count of alternative experiments (weekly); minutes invested in initial experiments (weekly)

