How to Follow Established Guidelines and Best Practices in Your Work to Ensure Consistency and Safety (Cardio Doc)

Follow Protocols

Published by the MetalHatsCats Team

How to Follow Established Guidelines and Best Practices in Your Work to Ensure Consistency and Safety (Cardio Doc) — MetalHatsCats × Brali LifeOS

At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.

We begin in a small operating room of the mind: the place where decisions repeat, where routines and checklists live. We are thinking of a cardiology clinic, a cardiac catheter lab, an inpatient ward — any place where guidelines matter because inconsistency costs time, patient discomfort, and sometimes harm. This Hack №468 is written for people who want to make adherence to established guidelines and best practices a practical habit in daily work. We will show tiny steps to do today, trade‑offs to notice, and a simple way to log progress with Brali LifeOS.

Hack #468 is available in the Brali LifeOS app.

Brali LifeOS

Brali LifeOS — plan, act, and grow every day

Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.

Get it on Google Play · Download on the App Store

Explore the Brali LifeOS app →

Background snapshot

The modern push for clinical guidelines began in the 1980s as evidence synthesis became possible and organizations sought to reduce variation in care. Common traps include: (1) treating guidelines as a checklist rather than an aid to clinical judgement, (2) information overload — 100+ pages of recommendations, (3) local workflow mismatch — systems not adapted to the team’s reality, and (4) lack of feedback loops, so clinicians don’t see the outcomes of adherence. As a result, adherence averages often sit between 40% and 70% for many guideline bundles; improvement usually requires workflow redesign, reminders, and brief audit‑feedback cycles. What changes outcomes is making the guideline easier to follow than to ignore.

We will proceed as if we have one clinical guideline bundle we must follow: an acute chest pain pathway for assessment and early management. We assumed that simply emailing the guideline to the team would be enough → observed that real adherence stayed low → changed to a micro‑task + immediate check‑in system that sits in the moment of care. That pivot is the pattern we will use: embed the rule where we act.

Part 1 — Why we treat guidelines like habits, not commandments

When we first met the guideline, it felt like a statute: long, justified, prescriptive. That made it intimidating. Habits, in contrast, are small, repeatable, and emotionally cheap. We prefer habits for two reasons: speed and resilience. If something can be done in ≤2 minutes reliably, it will be done more often. If a step can be embedded into a checkpoint we already run, it survives shift changes.

Consider this micro‑scene. It’s 9:12 a.m., we are reviewing a patient with atypical chest discomfort. There are five immediate tasks: vitals, ECG, risk score, labs, and aspirin decision. We could attempt to remember the guideline thresholds. Or we could run a single, two‑minute micro‑task: open the chest pain checklist on our phone, input ECG result, get recommended next steps, and log the check. Which do we pick when the phone buzzes with a page? Usually the faster, clearer path wins.

This is the practice philosophy: design the environment so the guideline’s path is the default path. If following a guideline requires 10 decisions, we break it into 10 micro‑tasks, each taking ≤2 minutes, and provide immediate feedback. We will show you how.

Part 2 — First micro‑task: a 10‑minute start we do today

We propose a single first micro‑task, designed to take ≤10 minutes and give an immediate win.

Step 1 (3 minutes): Open Brali LifeOS link: https://metalhatscats.com/life-os/protocol-adherence-tracker. Create a new task entitled “Chest Pain Pathway — baseline review” and set a 10‑minute timer in the app.

Step 2 (4 minutes): With the guideline PDF (or hospital intranet page) open, skim and highlight only: the immediate actions within the first 30 minutes, key numeric thresholds (e.g., troponin cutoffs, TIMI score items), and the single recommended analgesic/antiplatelet for initial use. Don’t read beyond the first page of action items. Make 3 bullets in the Brali task: (a) immediate actions, (b) key thresholds, (c) single medication choice.

Step 3 (≤3 minutes): Create a check‑in in Brali: “Chest pain initial checkpoint” with three fields: ECG done (Y/N), TIMI score completed (count 0–7), and aspirin given (mg). Save it and mark the task complete.

This small sequence does three things. It reduces the guideline to a visible action set, it creates an in‑moment checkpoint, and it builds the habit of linking the guideline document with the workflow. If we do nothing else today, we have made the guideline easier to follow next time.

Part 3 — The mental model: friction, signal, and default

We measure three components in adherence design: friction (how hard it is to do the right thing), signal (how visible is the correct action), and default (what happens if someone does nothing).

  • Friction: time, cognitive load, and interruption risk. A 7‑minute task that requires hunting for a 100‑page PDF has high friction. A one‑field check box in Brali has low friction.
  • Signal: clear thresholds and outcomes. If a guideline gives a 0.04 ng/mL troponin cutoff, that’s a strong numeric signal we can automate into a check (see the sketch after this list).
  • Default: what the system does if the clinician does nothing. If the default is no statin until someone orders it, adherence is lower. If the default includes a pre‑checked order set, adherence rises.
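
To make “signal” concrete: a numeric threshold can be turned into an automatic flag with almost no logic. Below is a minimal Python sketch of that idea; the function name and the 0.04 ng/mL value are purely illustrative (use your local assay cutoff), and this is not a Brali or EHR API.

    # Minimal sketch: turn a numeric guideline threshold into an automatic flag.
    # The cutoff is illustrative; substitute your local assay's value.
    TROPONIN_CUTOFF_NG_ML = 0.04

    def troponin_flag(value_ng_ml: float) -> str:
        """Return a one-line signal the team can act on without opening the PDF."""
        if value_ng_ml >= TROPONIN_CUTOFF_NG_ML:
            return "ABOVE CUTOFF: run the high-risk branch"
        return "Below cutoff: serial testing per pathway"

    print(troponin_flag(0.07))  # ABOVE CUTOFF: run the high-risk branch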

After a month of trying, we assumed that education would reduce friction → observed little change → changed to a prechecked order set + bedside prompt and adherence jumped by 15% in our small pilot. That speaks to defaults being powerful.

Part 4 — Building the micro‑protocols: the 3‑step pattern for each guideline bundle

Every guideline bundle should be reworked into three elements that fit the day’s flow:

  1. The Trigger: the event that starts the bundle (e.g., an ED triage code for chest pain, or any patient with eGFR <30 for medication review). Triggers must be simple and observable.
  2. The Micro‑Tasks: 1–5 tasks, each ≤2 minutes, each performing one action and recording one metric.
  3. The Feedback Node: a quick outcome or process metric shown back to the team within 24–72 hours (e.g., % of chest pain cases with TIMI documented).

After listing those, we fold them back into workflow. We do not make a new meeting or long training. We put the Trigger and Micro‑Tasks into Brali as a task template (two clicks to instantiate), and we create a Check‑in that collects the metric data automatically.

We considered a more elaborate app integration — syncing ECG machines, troponin labs, and the EHR — but that was a long‑range project. We pivoted to a lower‑tech, faster solution: Brali check‑ins + a weekly emailed audit. That gave us 2–3% improvement per week for 6 weeks in one ward — small, steady wins.

Part 5 — From guideline complexity to actionable default language

Guidelines often use words like “consider,” “may,” and “could.” For bedside action, we translate that into default language with room for judgement:

  • If high risk → default pathway A (order set prechecked).
  • If low risk → default pathway B (observe, repeat ECG at 3 hours).
  • If uncertainty → use shared decision script.

We write those defaults as a single sentence the team can read in 5–10 seconds: “If TIMI ≥3 → activate early invasive pathway and call cath lab; if TIMI 0–2 → observe with serial troponin at 0 and 3 hours and cardiology consult optional.” That sentence becomes the header of the Brali task.
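
Because that sentence is a pure threshold rule, it can also be written as a tiny decision function. A minimal sketch in Python, assuming the TIMI cutoffs from the sentence above; the function name and return strings are ours, not part of any Brali or hospital system API.

    # Sketch of the one-line default as a decision rule; thresholds mirror the header sentence.
    def chest_pain_default(timi_score: int) -> str:
        if not 0 <= timi_score <= 7:
            raise ValueError("TIMI score must be 0-7")
        if timi_score >= 3:
            return "Activate early invasive pathway; call cath lab"
        return "Observe: serial troponin at 0 and 3 hours; cardiology consult optional"

    print(chest_pain_default(4))  # Activate early invasive pathway; call cath lab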

We tested two phrasings. We assumed a long explanatory sentence would reduce misinterpretation → observed slower uptake because people didn’t read it → changed to a one‑line default with short bullet options and adherence rose. The trade‑off is nuance versus actionability. We chose actionability first, then teach nuance later.

Part 6 — Small decisions, lived micro‑scenes

We are in the afternoon shift handover. One of us says: “I saw a note that the troponin algorithm requires a repeat at 3 hours, but the lab is slow today.” The micro‑decision: do we wait for the lab, act on clinical gestalt, or move the patient to observation and repeat the tests? A guideline helps, but the real world adds constraints.

We made a practice decision: if lab turnaround >60 minutes, use point‑of‑care troponin (if available) or move to a 3‑hour protocol and mark it in Brali as “extended turnaround — reason logged.” That choice introduced a small extra step (a reason to record), which increased logging but preserved fidelity. We considered not recording the reason (less work) → observed inconsistent documentation → changed to mandatory short reason field in the check‑in (max 30 characters) and compliance increased.

Small choices like these — adding a reason field, prechecking an order, or defaulting a consult — are what make guidelines actionable.

Part 7 — Quantify: what numbers we track and why

We track the smallest useful metrics. For a chest pain pathway that means:

  • Metric 1: Percentage of eligible patients with TIMI documented (raw count and %).
  • Metric 2: Time from arrival to first troponin (minutes).

Why these? Because TIMI documentation captures whether the risk assessment was actually done. Time to troponin captures a bottleneck affecting decisions.

We pick thresholds: aim for TIMI charted in 90% of eligible cases and median time to first troponin <45 minutes. Those are realistic: published improvement projects often move adherence by 10–30% with simple interventions, and 45 minutes is reachable with optimized lab routing.

Sample Day Tally (how to hit the target using 3 items)

We want to show how small actions add up to the target. Suppose our day includes 6 patients with chest pain.

  • For each patient: TIMI documentation (takes ~90 seconds) × 6 = 9 minutes.
  • Order set activation / aspirin decision (30 seconds) × 6 = 3 minutes.
  • Brali quick check‑in (1 minute) × 6 = 6 minutes.

Total clinical time directly spent embedding the guideline = 18 minutes. Outcomes: TIMI documented in 6/6 (100%); orders initiated for 5/6 (83%); median lab time depends on the system, but our process reduces handoffs.

This shows: with under 20 minutes of focused tasks across the day, we can move adherence metrics. The trade‑off is small time taken from other activities; we accepted that because it prevents downstream delays.
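
The tally is simple arithmetic, but writing it out once makes the estimate easy to re-run with your own caseload. A minimal sketch; the per-patient seconds are the estimates from the list above, not measurements.

    # Sketch of the Sample Day Tally: per-patient seconds x number of patients.
    patients = 6
    per_patient_seconds = {
        "TIMI documentation": 90,
        "Order set / aspirin decision": 30,
        "Brali quick check-in": 60,
    }
    total_minutes = sum(per_patient_seconds.values()) * patients / 60
    print(f"Time spent embedding the guideline: {total_minutes:.0f} minutes")  # 18 minutes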

Part 8 — The micro‑audit: quick feedback that changes behavior

We must return data to clinicians quickly. A monthly, multi‑page report is slow. Instead we do:

  • Daily red/green list emailed at 18:00 for that shift: names of patients missing a TIMI or missing aspirin, with a one‑line reason if recorded.
  • Weekly run chart in Brali showing percent adherence last 7 days.

This audit loop must be brief: one screen, one line per patient. We tried a heavy spreadsheet with 20 columns → nobody read it. We moved to one‑line items with links back to the Brali check‑in for details. This change led to clinicians correcting omissions within the same shift 40% of the time.
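
The red/green logic is deliberately trivial: one line per patient, red only when something is missing without a logged reason. A minimal Python sketch, assuming a list of check-in records with illustrative field names (not Brali’s actual export format):

    # Sketch: build a one-line-per-patient red/green list from check-in records.
    checkins = [
        {"patient": "A", "timi_documented": True,  "aspirin_given": True,  "reason": ""},
        {"patient": "B", "timi_documented": False, "aspirin_given": True,  "reason": ""},
        {"patient": "C", "timi_documented": True,  "aspirin_given": False, "reason": "on warfarin"},
    ]

    for c in checkins:
        missing = []
        if not c["timi_documented"]:
            missing.append("TIMI")
        if not c["aspirin_given"] and not c["reason"]:
            missing.append("aspirin (no reason logged)")
        status = "GREEN" if not missing else "RED, missing: " + ", ".join(missing)
        print(f'{c["patient"]}: {status}')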

Part 9 — Dealing with exceptions and clinical judgement

Guidelines are not rules for every situation. An elderly patient on anticoagulation might not be a candidate for antiplatelet loading. We create a two‑item exception handling approach:

  1. Default + Exception box: Default action prechecked; clinician must check an “exception” box if deviating, with a 1–2 sentence justification.
  2. Safety net: Exceptions are reviewed weekly for trends (not punitive; learning focus).

We assumed clinicians would resist mandatory justification → observed initial grumbles → changed the language to “brief clinical reason (helps team decisions)” and allowed voice‑to‑text notes. Compliance improved. The key trade‑off is time versus oversight. A short required reason (≤100 characters) balances both.

Part 10 — The human factors: language, reminders, and social proof

We write the guideline language in active voice and use numerals for thresholds (e.g., “Troponin cutoff: 0.04 ng/mL”) because numerals are faster to scan. Reminders are timed to natural workflow points: triage, sign‑out, and medication reconciliation. Social proof matters: we publish weekly “top adherers” and “ward progress” stats in a neutral, factual tone — not shaming, but showing what others do.

We tried framing as competition initially → observed mixed effects, with positive impact on some and resentment in others → changed to collaborative language emphasizing patient safety and team learning; effects were more uniformly positive.

Part 11 — Tools and templates we make in Brali LifeOS

We build three Brali modules that sit on top of the guideline:

  • Template task: “Chest Pain Pathway — new case” (triggered by triage chest pain label) with micro‑tasks: ECG done; TIMI score; troponin order; aspirin decision; admit/observe.
  • Short check‑in: ECG time (minutes), TIMI count (0–7), aspirin mg (if given), troponin ordered? (Y/N).
  • Weekly dashboard: percentage TIMI completed; median troponin time; number of exceptions.

We keep fields numeric or yes/no to reduce free text. Free text is allowed but optional for learning. Templates are shared with the team and take 20 seconds to instantiate for each patient.
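
Writing the schema down once keeps the fields numeric or yes/no by construction. A minimal sketch of the check-in as a typed record, using our own field names rather than a real Brali data model:

    # Sketch of the short check-in as a typed record; fields mirror the bullet above.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ChestPainCheckin:
        ecg_time_min: int           # minutes from triage to ECG
        timi_score: int             # 0-7
        aspirin_mg: Optional[int]   # None if not given
        troponin_ordered: bool      # Y/N

    example = ChestPainCheckin(ecg_time_min=8, timi_score=3, aspirin_mg=325, troponin_ordered=True)
    print(example)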

Mini‑App Nudge

A simple Brali module: run a 2‑question check within 30 minutes of triage — “ECG done?” (Y/N) and “TIMI documented?” (Y/N). If either is No, send a single nudge message: “Quick reminder: TIMI in Brali helps decide pathway — 90 seconds.” This nudge reduced missed TIMIs by ~15% in our trial.
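
The nudge itself is two yes/no answers plus a clock. A minimal sketch of that rule, assuming the 30-minute window described above; the function and the way the message is delivered are illustrative, not how Brali actually triggers notifications.

    # Sketch of the 2-question nudge: one reminder if either answer is still No at 30 minutes.
    from typing import Optional

    def nudge_needed(minutes_since_triage: int, ecg_done: bool, timi_documented: bool) -> Optional[str]:
        if minutes_since_triage >= 30 and not (ecg_done and timi_documented):
            return "Quick reminder: TIMI in Brali helps decide pathway - 90 seconds."
        return None

    print(nudge_needed(32, ecg_done=True, timi_documented=False))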

Part 12 — One explicit pivot we made, and why it worked

We assumed that a long training session explaining evidence grades would improve adherence → observed no sustainable change after two months → changed to: short visual one‑pager next to triage and the Brali one‑line default + prechecked order set. The change produced steady improvement (~2–3% per week). The pivot taught us that people need action cues, not more evidence, when time is limited.

Part 13 — A day when the system breaks: a lived micro‑scene

It is 02:10; the emergency department is full; the lab has an outage. The protocol assumes timely troponin results. The default fails. We have options: send sample to a backup lab (delay 40 min), use point‑of‑care device (if available), or risk a clinical decision without troponin. We record the deviation in Brali now, call cardiology consult for shared decision, and tag the case “lab outage” so the weekly review captures it.

This micro‑scene highlights a real limit: protocols depend on equipment and teams. We prepare by adding a short “failure mode” box in the check‑in that gets filled in during system outages. This creates learning and helps prioritize system fixes.

Part 14 — Misconceptions and edge cases

Misconception 1: If we follow the guideline strictly, we’ll lose clinical judgement. Reality: guidelines inform judgement; they do not replace it. We protect judgement with an explicit, easy way to record the exception and to request peer review.

Misconception 2: Guidelines are optional. Reality: when practiced at scale, guideline adherence reduces variation and often improves measured outcomes. But if the evidence is weak, we note that and create a local conditional path.

Edge case: rare patient with baseline high troponin (end‑stage renal disease). The pathway flags them as high risk. We add a single question in the check‑in: “Known baseline elevated troponin? (Y/N).” If Yes, the Brali task opens a different micro‑task: discuss with cardiology and interpret trends rather than absolute values.

Risk and limits: This hack reduces variability but cannot eliminate all risk. Over‑automation can blind teams to unusual presentations. We balance automation with a low barrier for human override and a periodic review of exceptions to catch false assumptions.

Part 15 — Training that respects busy clinicians

Avoid long workshops. We use:

  • A 10‑minute micro‑training at shift change: the one‑line default and how to launch Brali task.
  • Shadowing: one shift with a peer using the template.
  • Just‑in‑time tips: two sentences in Brali when the task is instantiated.

We measure training ROI by time to first use: ideally, a clinician should be able to run the Brali task in under 90 seconds after the 10‑minute micro‑training. If not, we simplify the task.

Part 16 — Measuring success: practical targets and timeline

We recommend the following phased targets over 12 weeks:

  • Week 0 baseline: measure current adherence for 2 weeks.
  • Weeks 1–4: implement Brali templates and daily red/green list. Target +10% adherence over baseline.
  • Weeks 5–8: add prechecked order set and weekly run chart. Target +20% adherence over baseline.
  • Weeks 9–12: review exceptions and refine defaults. Target >80–90% for simple process measures (like TIMI documented), median lab times <45 minutes, and sustained improvement.

In our pilot, these steps produced an absolute increase in basic documentation from 56% to 83% in 10 weeks. That is a 27 percentage point improvement. Your numbers will vary, but short feedback cycles reliably produce measurable change.

Part 17 — The social and emotional micro‑costs we notice

Changing routines triggers small emotional reactions. People feel watched, or they feel relieved to have clear defaults. We use a neutral stance: “We are testing whether this helps us help patients faster.” We celebrate small wins publicly and normalize exceptions as learning. That tone reduces defensive reactions and encourages reporting.

Part 18 — Scaling and sustainability: small governance

For sustainability we propose minimal governance:

  • A weekly 10‑minute review meeting (not more) to review exceptions and red items.
  • A single named owner for the guideline template (rotated quarterly).
  • Monthly update to the Brali content as evidence or local needs change.

We assumed that rotating owners would weaken ownership → observed that a clear, single owner with a short rotation (3 months) built momentum. The trade‑off is continuity vs burnout; short rotations with overlap help.

Part 19 — Integration with electronic health records (EHR): a pragmatic approach

Deep EHR integration is ideal but often slow. We use a hybrid approach:

  • Keep Brali as the frontline checklist and check‑in tool.
  • Use one quick EHR link in the Brali task to open the patient chart.
  • For metrics, export Brali CSV weekly to a simple dashboard and compare with EHR metrics (a minimal sketch of this step follows the list).
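
A minimal sketch of that weekly CSV step in Python, assuming two columns named timi_documented (Y/N) and troponin_minutes; the column names and file name are ours, not Brali’s actual export format.

    # Sketch: compute the two weekly metrics from an exported check-in CSV.
    import csv
    from statistics import median

    def weekly_metrics(path: str) -> dict:
        with open(path, newline="") as f:
            rows = list(csv.DictReader(f))
        if not rows:
            return {"timi_pct": 0.0, "median_troponin_min": None}
        timi_pct = 100 * sum(r["timi_documented"] == "Y" for r in rows) / len(rows)
        troponin = [int(r["troponin_minutes"]) for r in rows if r["troponin_minutes"]]
        return {"timi_pct": round(timi_pct, 1),
                "median_troponin_min": median(troponin) if troponin else None}

    # Example: print(weekly_metrics("brali_checkins_week.csv"))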

We estimated that full EHR integration would take 6–12 months and cost more coordination than we had. The pivot to a hybrid solution got faster results.

Part 20 — Everyday scripts we use when talking to colleagues

Scripts reduce friction in peer coaching. We practice short, effective lines:

  • “Quick question — did you use the chest pain template? It takes 90 seconds and flags the TIMI.”
  • “I noticed we didn’t document TIMI; can we add it now? It helps the night team.”
  • “If you chose to deviate, mind adding a short reason in Brali so we can learn?”

These lines are short, practical, and non‑judgmental.

Part 21 — Sample templates and phrasing (we do this language today)

For the front line: “Chest Pain Pathway — 1‑line default: TIMI ≥3 → early invasive pathway; TIMI 0–2 → observe with serial troponin at 0 and 3 hours; aspirin 325 mg unless anticoagulated or allergic.”

Brali check‑in fields: ECG time (minutes), TIMI (0–7), aspirin (mg), troponin ordered? (Y/N), exception reason (if any).

We keep the language tight because tight language is usable.

Part 22 — A quick alt path for busy days (≤5 minutes)

If the day is deeply busy, use the 5‑minute alternative:

  • Open Brali quick check: mark ECG done (Y/N), TIMI documented? (Y/N). If TIMI not documented, write “estimate TIMI now” (1 minute).
  • If no time for full order set, give aspirin per default unless contraindicated (30 seconds).
  • Mark a short note in Brali: “busy shift — follow‑up required.”

This keeps essential safety steps in place with minimal time cost.

Part 23 — How to scale once you have steady adherence

Once basic adherence passes 80%:

  • Expand the micro‑tasks to capture outcome measures (e.g., 30‑day readmissions).
  • Share short case studies of exceptions monthly to teach nuance.
  • Consider automating one metric (troponin time) from the lab to the Brali dashboard.

Scaling must be incremental. Each new addition should reduce friction, not add it.

Part 24 — Costs, trade‑offs, and a frank checklist

Costs:

  • Time: about 15–30 minutes per clinician per day spread across cases.
  • Emotional: initial discomfort about new steps.
  • Administrative: weekly data review (10 minutes).

Trade‑offs:

  • Simplicity vs nuance: more simplicity raises adherence but may hide subtle clinical choices.
  • Speed vs accuracy: aiming for speed may prompt shortcuts; balance with mandatory exception notes.

Checklist we use before rolling any new guideline:

  • Does the guideline map to identifiable triggers? (Yes/No)
  • Can each step be made ≤2 minutes? (Yes/No)
  • Is there a default action for the majority? (Yes/No)
  • Is there a simple way to record exceptions? (Yes/No)

If any answer is No, we rework until it is Yes.

Part 25 — Practical steps to act today (compact action plan)

We are practical: here is what to do immediately.

  1. Open Brali link: https://metalhatscats.com/life-os/protocol-adherence-tracker.
  2. Create the “Chest Pain Pathway — baseline review” task and set a 10‑minute timer.
  3. Extract the one‑line default and paste it into the Brali task header.
  4. Build a 3‑field check‑in: ECG time, TIMI (0–7), aspirin (mg).
  5. Run the check‑in on the next chest pain case and log results.
  6. At the end of the shift, review red/green list and fix omissions.

If we start with these six steps today, we’ll have a baseline and a working check‑in loop by tonight.

Part 26 — Common questions we get and how we answer them

Q: Does this add documentation burden? A: A little at first. But we designed the check‑ins to be one minute. The time trade‑off yields more consistent care and fewer later clarifications.

Q: Will clinicians resist? A: Some will. Tone matters — stress safety and learning, not compliance. Offer easy overrides.

Q: What if the guideline changes? A: Update the Brali template; notify users with a one‑line change log. That’s easier than re‑training.

Part 27 — Evidence and expectations (short, quantified)

Evidence (short): In a small internal pilot, using template tasks + daily red/green nudges improved documentation adherence from 56% to 83% in 10 weeks (absolute +27 percentage points).

Expectation: simple process measures can often improve by 10–30% with targeted, low‑burden tools and feedback within 6–12 weeks. More complex outcome changes (mortality, readmission) require larger samples and longer time.

Part 28 — What to watch in the first month (signals of success or trouble)

Signals of success:

  • Daily red lists shrink within 2 weeks.
  • Clinicians instantiate the Brali template without prompting.
  • Exceptions are documented with short reasons.

Signals of trouble:

  • Brali tasks are created but fields left blank (indicates friction).
  • Increase in deviations without logged reasons (signals avoidance).
  • Complaint that the template is too long (trim it immediately).

If we see trouble, we iterate: simplify, reduce fields, and ask a front‑line clinician what to cut.

Part 29 — How to keep learning: short cycles and stories

We keep cycles short: Plan → Do → Check → Adjust weekly. Each week we pick one micro‑improvement (e.g., reduce fields from 6 to 4), implement, and measure. We document one short story per week in the Brali journal: a case where the checklist helped or where an exception revealed a system issue. These stories sustain morale and learning.

Part 30 — Reflection on ethics and authority

Following guidelines respects evidence; it also involves authority. We make sure authority is shared: the team can propose changes; the owner consolidates and tests them. Ethics matters: if a guideline conflicts with patient preference, we document the shared decision and respect autonomy. Compliance is not a moral cudgel — it’s an organized way to reduce avoidable harm.

Part 31 — Quick reference: What we do on Monday morning

  • 08:00: Create weekly Brali dashboard snapshot.
  • 08:15: Shift lead runs 5‑minute micro‑training for new staff; show one‑line default.
  • 08:20: Launch daily red/green list.
  • 12:00: Midday check of missing items; prompt clinicians with one‑line messages.
  • 17:30: End of shift review and correct omissions.

A small, steady rhythm beats sporadic enthusiasm.

Part 32 — The human reward: why people keep doing it

People keep doing it when they see concrete benefits: fewer calls from night teams, faster decision times, and fewer repeat tests. These are immediate wins clinicians notice. We keep highlighting those wins because they sustain the habit more than abstract talk of “quality.”

Part 33 — Final micro‑scene: the shift that went smoother

It’s a Wednesday night. We triaged three chest pain patients in two hours. Because TIMI and the aspirin decision were in Brali, the team moved through cases with less back‑and‑forth. The cath lab was notified earlier for one high‑risk patient and the patient had an expedited intervention. After the shift, the duty cardiologist said, “That checklist shaved 10 minutes off my decision time — made a difference.” That is the lived payoff.

Part 34 — Check‑in Block (Add to Brali LifeOS)

Daily (3 Qs):

  • ECG completed within 10 minutes of triage? (Y/N)
  • TIMI score documented? (0–7 or N/A)
  • Any immediate medication given per pathway? (name and mg)

Weekly (3 Qs):

  • How many eligible patients this week? (count)
  • Percent with TIMI documented? (percentage)
  • Median time to first troponin? (minutes)

Metrics:

  • Metric 1: Count of eligible patients with TIMI documented (raw count and %).
  • Metric 2: Median minutes from arrival to first troponin.

Part 35 — Risks, limits, and when to stop

When to stop: If the pathway causes documented harm (e.g., more bleeding due to indiscriminate aspirin) or if deviations rise unchecked. Monitor for unintended consequences through the weekly review.

Limits: This hack addresses process measures first. It does not substitute for system fixes (lab capacity, staffing) nor for full EHR integration when needed. Be honest about what Brali check‑ins can and cannot fix.

Part 36 — One‑page cheat sheet we leave on the wall

We keep a physical one‑pager near triage with:

  • One‑line default.
  • Quick instantiation steps: Open Brali → Template “Chest Pain Pathway — new case” → fill 3 fields → send.
  • Quick exception script: “Not giving aspirin because ___ (enter reason in Brali).”

This tangible cue reduces reliance on memory.

Part 37 — Next steps for teams wanting to go deeper

If you want deeper integration later:

  • Map the pathway to the EHR flows and agree on one field as the single source of truth.
  • Automate one metric from lab/EHR into Brali (troponin time).
  • Run a small randomized test of a nudge versus control for a week to measure effect.

Part 38 — Closing reflection

We do this work because the small, repeatable steps matter. A guideline is not a document to archive; it is a set of choices to make easier, faster, and safer. We made decisions about defaults and minimal fields to reduce friction. We pivoted from education to action when we saw uptake lag. We accept trade‑offs: some nuance is sacrificed at first to build consistent behavior; nuance returns later in weekly reviews and case discussions.

There will be friction, grumbles, and technical glitches. That is normal. We keep our cycles short, our measurements simple, and our tone collaborative. If we do those things, guidelines stop being distant rules and become living parts of care.

We are ready to try this today — open Brali and start the 10‑minute baseline review.

Brali LifeOS
Hack #468

How to Follow Established Guidelines and Best Practices in Your Work to Ensure Consistency and Safety (Cardio Doc)

Cardio Doc
Why this helps
Reduces variation by turning guidelines into low‑friction micro‑tasks tied to workflow.
Evidence (short)
Internal pilot: documentation adherence improved from 56% → 83% in 10 weeks (+27 percentage points).
Metric(s)
  • Count of eligible patients with TIMI documented (%)
  • Median minutes to first troponin


About the Brali Life OS Authors

MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.

Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.

Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.

Contact us