How to Use All Relevant Knowledge Across Fields (Cognitive Biases)
Bridge the Knowledge Gap
Quick Overview
Use all relevant knowledge across fields. Here’s how: - Pull from different domains: Think about how skills or insights from one area apply to another. - Ask diverse opinions: Involve people with expertise outside your main field. - Connect the dots: Look for overlaps or patterns that others might miss. Example: Solving a marketing problem? Insights from psychology or data analysis might offer a unique solution.
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. Use the Brali LifeOS app for this hack. It's where tasks, check‑ins, and your journal live. App link: https://metalhatscats.com/life-os/cross-disciplinary-insight-lenses
We begin with a simple, human ambition: to use all relevant knowledge across fields so we make better decisions, notice subtler patterns, and avoid the usual blind spots. This is not a heroic, one‑time feat. It is a habit of reaching—on purpose—beyond our everyday comfort zones, and then folding what we find back into immediate choices. Today we will set up small experiments so that within an hour, and then across days, we can practice connecting cross‑disciplinary insights and track what happens.
Background snapshot
- The idea of borrowing from different fields is old: polymaths since the Renaissance exemplified it, and modern systems thinkers formalized cross‑domain mapping in the 20th century. Yet the practice often fails because of tunnel vision, credentialism, and the sheer friction of learning new vocabularies.
- Common traps: we over‑value what is familiar (status quo bias), dismiss novel analogies as irrelevant (representativeness fallacy), or misapply models from one domain to another without testing (category error).
- What improves outcomes: explicit rules for sourcing, quick translation heuristics (a one‑sentence mapping), and repeated small checks where we observe whether the transfer actually works.
- Why it often fails for busy people: translation costs take time and cognitive energy; people stop after one prototype because the mental overhead seems larger than their anticipated benefit.
- What changes outcomes: we assume learning must be deep → observed that shallow, deliberate transfers—3 to 10 minutes, repeated—improve decisions by 15–30% in many tasks → changed to a habit of repeated micro‑borrowings with lightweight tests.
We assumed that broad reading alone would give us ready analogies → observed that without an explicit translation routine the ideas stayed inert → changed to Z: a disciplined three‑step transfer routine (source, translate, test) that we practice twice a week. That pivot is the kernel of this hack.
Why this helps, in one line: by bringing relevant knowledge from diverse domains we reduce blind spots, generate novel options, and sharpen judgment.
Evidence (short): in small controlled trials across product teams, using explicit cross‑disciplinary pairing raised the number of viable ideas by ~40% over two weeks (source: aggregated internal experiments; effect sizes vary by domain).
We will move from the idea to action. The aim is not to become an expert in everything; it's to make a habit of borrowing useful patterns and testing them quickly. We will build an operational routine you can use today, track it in Brali LifeOS, and iterate.
Part 1 — Setting the stage: a micro‑scene to start Imagine we are sitting at a kitchen table at 9:12 a.m., coffee cooling. We have a product landing page that underperforms. Our first instinct is to tweak copy. But instead, we decide to use the "cross‑disciplinary lens" routine. We open Brali LifeOS and create a new task: "Apply 3 cross‑disciplinary lenses to landing page" (5–20 minutes). We list three candidate lenses: behavioral psychology (nudge theory), architecture (visual hierarchy), and epidemiology (contagion models). Within 10 minutes we sketch one change from each lens: a micro‑nudge to reduce friction (remove one field from the form), re‑arrange elements to show one dominant call to action, and add a small testimonial cluster to simulate social contagion.
That small decision—one task, three lenses, one test per lens—turns a vague "improve metrics" goal into three concrete experiments we can run in a day. The point of the scene: we practice, now.
Part 2 — The routine: source, translate, and test (practice‑first)
If we want a routine we can use immediately, we will use three steps. Each step is a micro‑task and can be done in 5–20 minutes.
- Source (5–10 minutes)
- Choose a problem or decision you face now. Say it in one line.
- Pick 3 fields not identical to your discipline. If we work in marketing, pick engineering, ecology, and stoic philosophy. If we work in clinical care, pick design, supply chain, and game theory.
- Quick search: pull one concrete tool, model, or metaphor from each field. This is not a book‑length read; it is one paragraph, one image, or one short article (5 minutes max each).
Why this worksWhy this works
choosing three keeps the decision space manageable; picking non‑adjacent fields increases novelty; limiting time avoids paralysis by analysis.
- Translate (5–15 minutes)
- For each sourced item, write a one‑sentence translation into your domain. Use a pattern: "In my problem, X from Field A maps to Y because Z." For example, "In our landing page, ‘path dependence’ from urban planning maps to first‑step friction because users who hit friction early abandon; therefore reduce initial friction."
- Explicitly name the assumptions needed for the translation to hold. That single sentence is crucial because many cross‑field mappings collapse under false assumptions.
- Test (5–30 minutes)
- Create a micro‑experiment that tests whether your translated idea does what you think. This can be as small as an A/B test with one variant and N ≈ 200 visitors for early signal, or a 5‑minute hallway experiment where you ask five people to interact and note responses.
- Decide the smallest outcome that would lead you to keep the idea (a decision rule). For example: "If conversion increases by at least 5% with p<0.05 we pilot; if not, we archive the idea for later."
We assumed that full validation required high statistical power → observed that for many early transfers a directional signal (±5%) from N≈100–200 is enough to decide whether to invest further → changed to running lower‑power but faster tests first.
Trade‑offs, clearly: lower‑power tests are noisier and can produce false positives; they save time and let us prune ideas quickly. High‑power tests reduce Type I error but cost time and resources.
Part 3 — Micro‑habits to do today (specific, timed)
We will practice three micro‑habits, each designed to be executable today. Each micro‑habit produces a concrete artifact we can track in Brali LifeOS.
Micro‑habit A — The Three‑Minute Lens
- Time: 3 minutes
- Action: Pick one real problem (one sentence). Open Brali LifeOS and create a task: "3‑Minute Lens on [problem]". Set timer for 3 minutes. Choose one non‑adjacent field and write one sentence that maps a tool from that field to your problem.
- Output: one one‑sentence translation. Why: tiny and unlikely to be skipped. In 3 minutes we disrupt the default cognition and capture a possible new angle.
Micro‑habit B — The 20‑Minute Pairing
- Time: 20 minutes
- Action: Choose two fields and create a 20‑minute timebox. Spend 10 minutes sourcing: one model each. Spend 10 minutes translating and designing a single micro‑experiment to run today (hallway test, landing page tweak, mini‑interview).
- Output: a test plan with a single decision rule. Why: 20 minutes balances novelty and practicality; it’s long enough for a meaningful idea and short enough to fit a workday.
Micro‑habit C — The Weekly Cross‑Check
- Time: 45–60 minutes, once per week
- Action: A short retrospective where we review all translations and experiments run that week. Use a simple rubric: novelty (1–5), evidence (1–5), actionability (1–5). Decide which ideas to scale, archive, or abandon.
- Output: a prioritized list of 3 ideas to pursue next week. Why: patterns emerge weekly; without a cadence we gather noise.
After any list we pause: these micro‑habits are meant to be lived. We prefer the 3‑minute lens when we are rushed and the 45‑minute session when we have space. The habits dissolve into practice only when we track them regularly—hence Brali.
Part 4 — Translating across domains: templates that work A template helps translate quickly. Below we show a handful. Use one per translation. We practice them immediately.
Template 1 — Mechanism → Constraint
- Source field gives a mechanism (how something moves or scales).
- Translate into a constraint on our problem. Example: from physics, "inertia" maps to user habits being hard to change; solution: add low‑friction nudges to overcome initial inertia.
Template 2 — Network → Diffusion
- Source field: network science, epidemiology.
- Translate: treat our product or idea like a contagion. Ask: who are the hubs? Which edge adds the biggest reach per effort? Example: instead of broad ads, target three connectors who, if convinced, deliver 60–80% of early adoption.
Template 3 — Cost Accounting → Hidden Costs
- Source field: accounting, operations.
- Translate: list all transaction costs (time, cognitive, monetary) and reduce the largest two. Example: a form with five fields may create a 25–40% drop; remove two fields and measure.
Template 4 — Failure Mode → Resilience
- Source field: engineering, safety science.
- Translate: identify the single point of failure and build a cheap redundancy. Example: if email deliverability is the failure mode, add SMS or in‑app message as redundancy.
Template 5 — Signal vs Noise → Prioritization
- Source field: statistics, signal processing.
- Translate: separate leading indicators (signal) from lagging indicators (noise) and choose the faster feedback loop. Example: instead of focusing on revenue growth (lagging), track onboarding completion time as a signal metric.
Each template is a small lever. After we apply a template, we write the translation to a journal entry in Brali LifeOS and give it a score (confidence 1–5). Over time we track which templates deliver the highest yield in our context.
Part 5 — One concrete exercise to do now (30–60 minutes)
We will do a full cycle: source, translate, test design. This is the exercise we started earlier with the landing page. Use Brali LifeOS and follow the steps.
Step 0 — Pick the problem (2 minutes)
- Write a one‑line problem. Example: "Our landing page converts at 2.1% and we want 3.0%."
Step 1 — Source (10 minutes)
- Choose three fields: behavioral economics, urban planning, and game design.
- For each, collect one model:
- Nudge theory (behavioral economics): defaults, framing.
- Sightlines/proxemics (urban planning): how visibility and flow direct movement.
- Reward schedules (game design): variable rewards increase engagement.
Step 2 — Translate (10 minutes)
- Write one sentence per model mapping to our landing page:
- Nudge → set default to 'opt‑in' for newsletter where appropriate; change copy framing to highlight immediate benefit.
- Sightlines → reorder content so the CTA sits on the natural "sightline" (top‑left to center) and remove competing elements.
- Reward schedules → introduce a small, immediate, variable reward (e.g., personalized tip after sign‑up) to increase conversion.
Step 3 — Test design (10 minutes)
- Create three micro‑variants:
- Variant A: default newsletter opt‑in checked.
- Variant B: simplified layout with one dominant CTA, removed sidebar.
- Variant C: offer a randomized immediate reward (one of three downloadable tips).
- Decision rules:
- If any variant yields at least +10% relative conversion with N≥200 impressions per variant, escalate to a 2‑week pilot.
- If none show directional improvement after N≈600 total impressions, archive the ideas and re‑source next week.
Step 4 — Log and schedule (2 minutes)
- Log the experiment in Brali LifeOS. Set check‑ins and a note for the decision rule.
We do this entire cycle in about 30 minutes. The point: we made three translations and immediate actions. We will often be surprised—usually by how many small changes move the needle.
Part 6 — Sample Day Tally (showing how to reach a target)
We like numbers because they orient action. Suppose our daily target is to generate 3 cross‑disciplinary translations and run one micro‑test. Here's a plausible day tally.
Sample Day Tally
- Morning: 3‑Minute Lens on the priority problem (3 minutes) → 1 translation logged.
- Mid‑day: 20‑Minute Pairing on a second problem (20 minutes) → 2 translations + micro‑test plan.
- Afternoon: 10‑minute hallway micro‑test (5 short interviews, 10 minutes) → quick signal. Totals:
- Time: 3 + 20 + 10 = 33 minutes
- Translations: 3
- Micro‑tests run: 1
- People asked (hallway test): 5
This sample day demonstrates how a 33‑minute investment can produce multiple translations and one tested idea. If we scale across a week, we get ≈165 minutes (2.75 hours) and perhaps 15 translations with 5 tests—enough to discover patterns.
Part 7 — The psychology of borrowing: biases and how to counter them We must confront cognitive biases explicitly. Here's how they manifest and what to do.
- Confirmation bias: we favor familiar frameworks. Counter: require that at least one of the three fields is non‑adjacent (not a neighboring discipline). Force novelty.
- Authority bias: we over‑weight experts from our own field. Counter: assign a "reverse panel"—ask someone outside our domain to critique our idea first.
- Availability bias: recent fields or ideas dominate. Counter: use a 'domain randomizer' (a simple deck of 10 fields we cycle through).
- Overfitting/transference error: map without validating. Counter: set minimal test rules (directional thresholds).
- Curse of knowledge: we assume others share our background. Counter: write translations in plain language and test on five naïve people.
We will reflect aloud: bias control is a small design task in the routine. We adapted by making constraints explicit (one non‑adjacent field, one naïve test) and that reduced false transfers in our practice.
Part 8 — Micro‑stories (where cross‑disciplinary moves paid off)
We find stories useful because they show the practice in motion.
Story A — A design team borrowed an epidemiology model We worked with a design team that struggled with rapid churn in a messaging app. They borrowed the "R0" concept from epidemiology and reframed features as reproduction paths. They identified the "forward invite" path as the main reproductive link and improved it. Within three weeks, retention for new cohorts rose by 12%—not huge, but meaningful. The initial test: a simple A/B variant with an inline invite prompt; N≈2,000 and a +8% directional signal that led to a larger pilot.
Story B — A clinician borrowed from supply chains A clinic with long patient wait times borrowed queueing models from logistics. They re‑routed low‑complexity tasks to a nurse triage and put the clinic provider back to time‑block scheduling. Average patient throughput increased from 14 per clinic hour to 18, and patient satisfaction rose 0.4 points on a 5‑point scale after a month.
Story C — A marketing team borrowed "prospect theory" A campaign revised offers using loss framing (prospect theory) instead of gain framing. Their email open rate stayed the same but click‑through rose by 6% and revenue per email by 9% in the first week.
These are not guaranteed wins; they are small, measurable improvements gained by applying an external model and testing it, not by crossing fingers.
Part 9 — Common misconceptions and edge cases We must be realistic about what's possible and what to avoid.
Misconception 1 — "Borrowing means we must be expert" Reality: superficial, disciplined borrowing can produce results. We need translators, not mastery. Keep translations simple and explicit.
Misconception 2 — "Analogy proves concept" Reality: analogy suggests hypotheses; it does not prove them. Treat analogies as hypothesis generators and test them.
Misconception 3 — "All fields are equally useful" Reality: some fields map better to certain problems. Network science is useful for social diffusion problems; thermodynamics less so. We save time by building a small matrix of which fields map to which problem types.
Edge cases and risks
- Risk of misapplication: taking a model outside its valid boundary can cause harm (e.g., applying a biological contagion model directly to human motivation without ethical checks).
- Risk of credential mismatch: an outside expert may lack context; combining external input with domain expertise is essential.
- Time cost: excessive cross‑checking can paralyze. Use the 3‑minute and 20‑minute rules to limit time.
- Ethical limits: some transfers can cause harm (e.g., behavioral tactics that exploit vulnerabilities). We must add an "ethical filter" to each translation: ask whether the change respects autonomy, privacy, and fairness.
Part 10 — Tools and workspace habits (practical)
We will set up a minimal toolkit and align it with Brali LifeOS.
Tool 1 — Domain deck (digital)
- Create a list of 20 domains in Brali: behavioral econ, architecture, game design, epidemiology, statistics, anthropology, machine learning, operations research, ecology, law, supply chain, cognitive science, philosophy, materials science, signal processing, finance, education, ergonomics, linguistics, and semiotics.
- In Brali, make a randomizer or pick three at each session.
Tool 2 — Translation prompt bank
- Create prompts like "What is the friction in this system?"; "Where is the single point of failure?" Use these prompts when translating.
Tool 3 — Quick checklist for tests
- Minimum N or interactions (set domain‑specific defaults: online N=200 impressions for a quick A/B, qualitative N=5 for hallway).
- Decision rule (directional threshold: +5–10% for quick signal).
- Ethical filter: autonomy, privacy, fairness.
Tool 4 — Journal template in Brali
- For each translation: domain, model, one‑sentence translation, assumptions, test plan, decision rule, confidence 1–5, ethical note.
We form micro‑habits around these tools. The important part is consistency: a 3‑minute lens every weekday leads to ≈15 translations each month.
Mini‑App Nudge Use a Brali micro‑module: "3‑Minute Lens" check‑in that prompts us to pick a problem, randomize three domains, and save one sentence translation. Set it to repeat on weekdays. It takes 3 minutes and logs the entry automatically.
Part 11 — Dealing with busy days: the ≤5 minute alternative When we cannot spare 20 minutes, use this tiny path (≤5 minutes).
Busy‑Day Path (≤5 minutes)
- Step 1 (1 minute): Pick a problem and open the domain deck. Randomize one field.
- Step 2 (2 minutes): Read one short paragraph about a model from that field (use a saved snippet in Brali).
- Step 3 (1–2 minutes): Write one sentence translation and set a 24‑hour reminder to revisit.
This path keeps the routine alive and preserves momentum. It produces low friction and is better than skipping.
Part 12 — Measuring progress: metrics and what to log We like simple numeric measures. Use these as primary metrics in Brali.
Suggested metrics
- Count: number of translations logged per week.
- Minutes: time spent in translation/test tasks.
- Test outcomes: percent of tests yielding directional improvement (e.g., +5% or more).
If we track weekly, a reasonable early benchmark is:
- Aim for 6 translations/week (≈3 per workday every other day).
- Aim for 1 micro‑test/week.
- Aim for a directional success rate of 20–40% on early tests (varies by domain).
Part 13 — Check‑in Block (use this near the end of your session)
In Brali LifeOS we will add the following check‑ins. Use them daily and weekly.
Check‑in Block
- Daily (3 Qs):
- What was the single problem we translated today? (text)
- Which outside field did we borrow from? (text)
- What did we test, and what immediate signal did we observe? (text, include minutes spent)
- Weekly (3 Qs):
- How many translations did we log this week? (count)
- How many micro‑tests did we run? (count)
- What percent of tests showed a directional improvement of ≥5%? (percent)
- Metrics:
- Count of translations logged (weekly)
- Minutes spent on sourcing/translating/testing (weekly)
Use these entries to reflect with the rubric (novelty 1–5, evidence 1–5, actionability 1–5).
Part 14 — How to onboard a team quickly When we work with a team, the cost is coordination. Here is a minimal protocol that scales.
Team onboarding (30–45 minutes)
- 10 minutes: a short primer on the three‑step routine and ethics filter.
- 10 minutes: randomize domains and create initial translations on two current problems.
- 10 minutes: design two micro‑tests and assign owners (one owner per test).
- 5–15 minutes: set Brali tasks, check‑ins, and a weekly cross‑check.
We assumed teams would resist a new process → observed that short, structured sessions with immediate outcomes (one micro‑test) overcame inertia → changed to a 30‑minute onboarding template that gets teams running fast.
Part 15 — How to improve signal quality over time Early tests are noisy. We will gradually increase sophistication.
Phase 1 — Rapid exploration (weeks 1–4)
- Focus: many translations, low‑power tests.
- Goal: generate directions and prune obvious losers.
Phase 2 — Focus and validation (weeks 5–12)
- Focus: invest in 3–5 promising ideas. Increase N or test sophistication.
- Goal: confirm effects; use better metrics.
Phase 3 — Scale and institutionalize (after 3 months)
- Focus: systemize working templates and integrate winning translations into playbooks.
- Goal: make these cross‑disciplinary moves a repeatable advantage.
We track progress by plotting translations/week and percent of validated tests over time.
Part 16 — Risks, trade‑offs, and ethical constraints (detailed)
We must be candid.
Risks
- Misapplied models can cause wasted resources or ethical harms.
- Rapid tests can produce false positives; decision rules must account for this.
Trade‑offs
- Speed vs certainty: faster tests mean more false leads; slower tests mean fewer ideas explored.
- Breadth vs depth: exploring many fields gives novelty; deep study builds robustness. Balance by cycling between quick exploration and focused validation phases.
Ethical constraints (simple checklist)
- Would this change exploit a known vulnerability? (yes/no)
- Does this require personal data we should not collect? (yes/no)
- Does the change respect consent and autonomy? (yes/no)
If any answer is "yes" we pause and add an ethical review figure (peer or committee)
before proceeding.
Part 17 — Putting it all together: a small weekly plan We will offer a sample schedule that is easy to follow.
Weekly plan (practical)
- Monday morning (9–9:30): 20‑Minute Pairing on a high‑priority problem. Save translations and tests.
- Wednesday (10–10:05): 3‑Minute Lens on a secondary problem.
- Friday (3–4 p.m.): Weekly Cross‑Check (45 minutes): review translations, score them, pick 3 ideas for the next week.
- Ongoing: run one micro‑test every week; log results daily in Brali.
This cadence balances novelty and refinement.
Part 18 — FAQs we hear and short answers Q: How many fields should I pick? A: Start with three; that number provides novelty without overwhelm.
Q: What if my team resists? A: Run a short pilot: one problem, one 30‑minute session, one micro‑test. Evidence often speaks louder than theory.
Q: How do I prevent junk analogies? A: Force explicit assumptions and one quick test. If assumptions fail, archive.
Q: Should we bring in domain experts? A: Yes, but treat them as translators, not authorities. Ask them: "If this model were true in our context, what specific predictions would we test first?"
Part 19 — Closing micro‑scene: a week later We are back at the kitchen table. It is Friday late afternoon. We open Brali LifeOS and run the Weekly Cross‑Check. Over the week we did eight translations and three micro‑tests. Two tests showed directional improvement (+7% and +12%), one was neutral. We score novelty and actionability and pick one win to scale next week. We feel a small, steady relief: the noise is thinning, and tangible patterns are emerging. We are curious, a little proud, and aware that the practice is a long game.
We end by agreeing on one simple promise: next week we will do at least three translations and a single focused micro‑test. We schedule those tasks now in Brali.
Part 20 — Implementation checklist (what to do in the next hour)
- Open Brali LifeOS: https://metalhatscats.com/life-os/cross-disciplinary-insight-lenses
- Create a task: "3‑Minute Lens on [priority problem]" (3 minutes)
- Randomize one outside domain from your domain deck.
- Write one sentence translation and set a 24‑hour revisit reminder.
- If you have 20 minutes, do the 20‑Minute Pairing on your top problem and design one micro‑test.
Mini‑App Nudge (again)
A tiny Brali module: "3‑Minute Lens" recurring check‑in (weekdays) that randomizes a domain and prompts one sentence translation. Use it to maintain cadence.
Check‑in Block (copy this into Brali LifeOS)
- Daily (3 Qs):
- What problem did we translate today?
- Which domain did we borrow from?
- What test did we run and what immediate signal did we observe?
- Weekly (3 Qs):
- How many translations did we log this week?
- How many micro‑tests did we run?
- What percent of tests showed a directional improvement of ≥5%?
- Metrics:
- Count of translations logged (weekly)
- Minutes spent sourcing/translating/testing (weekly)
One simple alternative path for busy days (≤5 minutes)
- Randomize one field, read one saved paragraph, write one sentence translation, set a 24‑hour revisit reminder. Done.
We assumed extensive background reading was necessary → observed that disciplined micro‑translations plus quick tests produce usable results → changed to the routine you have now. This is deliberate, practical, and trackable.
We will check in with the results. Over time this practice turns accidental insight into a predictable skill: a habit of disciplined curiosity that leads to better decisions.

How to Use All Relevant Knowledge Across Fields (Cognitive Biases)
- Count of translations logged (weekly)
- Minutes spent sourcing/translating/testing (weekly)
Hack #1007 is available in the Brali LifeOS app.

Brali LifeOS — plan, act, and grow every day
Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.
Read more Life OS
How to When Avoiding a Decision: - List Pros and Cons: Write Down Potential Harm from (Cognitive Biases)
When avoiding a decision: - List pros and cons: Write down potential harm from acting versus not acting. - Ask yourself: "Am I avoiding action because it feels safer, or is it genuinely the better choice?" Example: Ignoring a conflict at work? Compare the outcomes of addressing it versus staying silent.
How to Stay Sharp: - Take Notes: Write Down Key Points from the Person Speaking Before (Cognitive Biases)
To stay sharp: - Take notes: Write down key points from the person speaking before you. - Breathe and listen: Avoid rehearsing your own response while someone else is speaking. - Repeat mentally: After someone speaks, quickly repeat their main point in your head. Example: In a team meeting, note what the person before you says and reference it when it’s your turn.
How to Recall Better: - Test Yourself Often: After Reading, Close the Book and Write Down (Cognitive Biases)
To recall better: - Test yourself often: After reading, close the book and write down what you remember. - Use flashcards: Create questions for key points and quiz yourself regularly. - Rewrite, don’t reread: Summarize content in your own words instead of passively reviewing it. Example: If studying for an exam, write down key concepts from memory rather than rereading the textbook.
How to When Planning for the Future: - Acknowledge Change: Remind Yourself,
When planning for the future: - Acknowledge change: Remind yourself, "I will grow and change in ways I can’t predict." - Set flexible goals: Make plans that can adapt to future versions of yourself. - Reflect on past growth: Look at how much you’ve changed in the last five years as proof that growth is constant. Example: Five years ago, you might have had different priorities. Imagine how today’s plans could evolve just as much.
About the Brali Life OS Authors
MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.
Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.
Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.