How to Regularly Review How Things Are Typically Done and Question If There's a Better Way (Be Creative)
The Checklist Method
We have a moment most workdays where we catch ourselves repeating a habit we never decided on. The keyboard shortcut we stretch for but never bind. The weekly meeting that started as a check‑in and grew into an hour of vague updates. The process we inherited that still requires three spreadsheets, two Slack threads, and a calendar reminder, even though we could merge the sheets and automate the reminder in 10 minutes. We usually sigh, tell ourselves “later,” and keep going. Today, we propose something quieter and more deliberate: we review how things are typically done and gently ask if there’s a better way—then we test, not argue.
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.
Background snapshot: The habit we’re building here comes from continuous improvement (Kaizen), lean thinking, and decision hygiene. In teams, it shows up as retrospectives and after‑action reviews; in personal work, it’s the small moment when we ask, “Is this step necessary?” Most attempts fail not because we don’t care, but because we attempt giant redesigns, hold endless debates, and don’t protect time for small experiments. The pivot that changes outcomes is compact: reduce scope, set a timer (5–15 minutes), and validate the next best alternative with one tiny test and one metric. Creative progress often looks less like inspiration and more like consistent micro‑edits to reality.
We picture a modest afternoon: the kettle clicks off. A notification on our phone says “Review & Question — 7 minutes.” We almost swipe it away. We don’t. We open a short checklist, select one routine that annoyed us this morning—let’s say, composing status emails that nobody reads—and we run a tiny experiment: a 120‑word version with a header and a single request, sent to two recipients with a short P.S. asking if this version worked better. We set a follow‑up for Friday to compare responses. It cost seven minutes. We feel a faint relief. Something was shaped; not grand, but precise.
This is not a manifesto. It’s a practice. We will build it piece by piece, and we will track it. Our goal today: set up one micro‑routine to review a single “typical” task, question it, and test a better version before the day ends.
Hack #92 is available in the Brali LifeOS app.

The quiet engine: a loop we can actually run
We need a loop we can run even on a bad day. Pretend we have only 10 minutes. We still want a full cycle: choose → question → redesign → test → note result. We’ll keep the vocabulary spare and the numbers small.
- Trigger: a fixed micro‑window, 5–12 minutes, preferably near an existing routine (after lunch, after first break). We choose a time we actually hit 4 days out of 5.
- Target: one workflow or artifact under 15 minutes, or one step inside a longer workflow (rename files, send updates, prep ingredients, warm up for a run, open the analytics dashboard).
- Question: “What are the minimum steps that produce the same or better outcome?” We do not re‑architect the universe; we shave one corner.
- Test: swap one step or one constraint, run it once, track one metric (count, minutes, clicks, replies).
- Debrief: two sentences in our journal: “What changed? Keep or revert?” That’s it.
This loop has two properties we like. First, it fits in a lunch break. Second, it stacks. If we run it 3 times per week, that’s roughly 150 micro‑changes per year; even if we skip a third of the weeks, we still land near 100. If each change saves 1–4 minutes or reduces one small source of friction, the compounding is real. If a third of them fail, the timebox keeps the cost capped.
We initially assumed we needed a complex board with tags and priorities to manage all potential improvements → we observed we avoided the board and lost momentum → we changed to a daily “pick just one annoyance and test one tweak” approach. The board came back later, but only as a light list we review once a week.
The beginning: we pick one annoying routine and pare it down
Let’s step into a real scene. It’s 3:10 p.m., we are staring at a document titled “Weekly Reporting.” We hate it, but we can’t skip it—it unblocks people. Typical steps (be honest): pull task statuses by hand, write the narrative, format the document, then copy to email, address the team, and send (3 minutes for that last step alone).
Total time: 23 minutes, plus emotional drag. We accept this as “how it is.” What if we question three points:
- Do we need a full narrative each week, or can we alternate detailed and short formats?
- Can one document auto‑pull task statuses?
- Can we switch to a standardized 3‑bullet template with one explicit ask?
We don’t have to answer all three now. We pick one change for today. Let’s choose the template swap. We will use:
- 1 line: Objective this week.
- 1 line: Progress (with count/percent).
- 1 line: Blockers (with a single ask).
- Optional P.S.: What changed since last week.
We set a timer for 9 minutes. We run the new format once. We send it. We ask two recipients to reply with a 1–5 rating for clarity and whether the ask is clear (Y/N). Our metrics: minutes to compose, number of replies, and the clarity rating. If clarity scores under 4 or the ask is ignored, we revert or iterate next week. We write a 2‑sentence debrief in Brali.
This is the whole action today. Our creative judgment shows up in choosing what not to change yet. We contain the experiment and make feedback quantitative (2 ratings). If we feel an urge to redesign everything, we note it, then return to our 9‑minute box.
Mini‑App Nudge: In Brali LifeOS, add the “One Step Better” micro‑module and set the timer to 9 minutes. It preloads the two debrief questions so we don’t type them fresh each day.
Why this works: constraints make creativity useful
We sometimes resist the word “creative” because it sounds like a mood. In practice, creativity under constraints outperforms free‑form ideation for operational work. Constraints narrow the search space; tiny tests reduce politicking; metrics prevent drift. There’s also the psychology: finishing a small experiment is rewarding, and reward predicts repetition.
- Timebox: 5–12 minutes is short enough to start, long enough to produce a change.
- Scope: one step or one routine keeps risk low; we can reverse quickly.
- Metric: one number (minutes, clicks, replies, defect count) makes it real; two numbers can be okay, but one is easier to track daily.
- Debrief: two sentences under a hard limit forces synthesis.
We can tolerate a 30–40% failure rate in micro‑tests. If we run three tests a week, we will likely have one clear win, one wash, and one loss. Wins compound. Losses are bounded by the timebox.
We can be frank about the cost. Reviewing and questioning costs minutes; not every week has slack. But we also know routine inefficiencies cost us daily. If we save 3 minutes per day on a recurring task and we run it 200 days per year, that’s 600 minutes (10 hours) reclaimed. If one 10‑minute tweak per week yields that, the ROI is high. If we accumulate five such wins a year, we have 50 hours to redeploy to deep work, rest, or learning. That is not a thought experiment; it’s arithmetic.
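If we want to rerun that arithmetic with our own numbers, a few lines of Python will do it. This is a minimal sketch; the inputs are the illustrative figures from the paragraph above, not measured data.

```python
def minutes_reclaimed_per_year(minutes_saved_per_run: float, runs_per_year: int) -> float:
    """Total minutes one improvement returns over a year of repeated runs."""
    return minutes_saved_per_run * runs_per_year

# Illustrative figures from above: 3 minutes saved per run, 200 runs per year.
saved = minutes_reclaimed_per_year(3, 200)
print(f"One win: {saved:.0f} minutes (~{saved / 60:.0f} hours) per year")  # 600 min (~10 h)
print(f"Five such wins: ~{5 * saved / 60:.0f} hours per year")             # ~50 hours
```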
Choosing the right candidates: small, frequent, and annoying
We don’t need to improve everything. Start with the intersection of three filters:
- Frequency: happens at least weekly.
- Annoyance: small friction we feel in the body (sigh, shoulder tension).
- Controllability: we can change it without waiting for approvals.
Candidates include:
- Status updates, meeting prep, agendas, handoffs, weekly reviews.
- File naming, folder structure, screenshots/recordings, snippet templates.
- Meal prep, gym warm‑up, nightly shutdown routine, device charging.
- Data pulls, dashboards, query templates, code snippets, Git hooks.
We can pick one today. If we draw a blank, we scan our last 24 hours. Where did we mutter “this again”? That’s it. If truly nothing comes to mind, we can still practice the loop on something tiny, like binding a new shortcut for an action we do daily (e.g., archive email, rename files).
We avoid one trap here: arguing about the best change before testing any change. We don’t have to choose the perfect idea. We pick the smallest safe tweak and ship it.
The day we learn to measure tiny things
Measuring micro‑changes feels silly until we see the curve. We pick simple, boring metrics:
- Minutes to complete the routine (measured once per day or per run).
- Counts that matter: number of clicks, number of steps, number of errors, number of replies, number of files, number of people CC’d.
- Rates where possible: response rate (%), on‑time rate (%), error rate (%), completion rate (%).
We don’t need a dashboard yet. A lines‑in‑journal approach is fine:
- 2025‑10‑06 Mon: Status email v2. Minutes: 12 (down from 23). Replies: 2/2 responded, clarity avg=4.5/5.
- 2025‑10‑07 Tue: File rename macro. Minutes: 2 (down from 6). Errors: 0 (down from 1 yesterday).
Over a week, we can see patterns. Over a month, we can choose what to keep, automate, or teach.
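If the journal lines start to pile up, a tiny script can keep them queryable without building a dashboard. This is a minimal sketch under our own assumptions: a local CSV file named micro_tests.csv and field names we invented for illustration (Brali or paper works just as well).

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("micro_tests.csv")  # hypothetical local file; any path works
FIELDS = ["date", "target", "change", "metric", "result"]

def log_micro_test(target: str, change: str, metric: str, result: str) -> None:
    """Append one micro-test entry; write the header on first use."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "target": target,
            "change": change,
            "metric": metric,
            "result": result,
        })

log_micro_test(
    target="Status email",
    change="3-line template, 120-word cap",
    metric="minutes to compose",
    result="11 (down from 23)",
)
```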
A small case study in three scenes
Scene 1 — Editing calendar invites: We notice recurring 1:1 invites have vague titles. People ask “Agenda?” in chat. We assume the solution is a shared doc with prompts. We test a different change: embed a two‑line agenda in the calendar invite itself, using a structured prefix with two tags: “[Focus] [Decision]”. We track “number of times someone asks for agenda” (baseline: 3 per week). After one week, the asks drop to 1 per week. We keep it and standardize the prefix. Minutes saved: 3–5 per meeting prep.
Scene 2 — Daily capture folder clutter: Screenshots pile in Desktop. We assume we need a complex file system. Instead, we change one setting: default screenshot location to “Screenshots/Inbox,” with an Alfred or Spotlight keyword to open it. We create a daily 2‑minute sweep labeled “2@Screens.” Metric: number of loose screenshots on Desktop at EOD. Baseline: 15. After change: 0–3. Emotional relief: noticeable; we no longer feel messy.
Scene 3 — Handoffs: A small team ships a feature. Handoffs to QA are chat‑based and late. We assume the fix is a new ticket template. We test a one‑line change: add a “Definition of Done” 3‑checkbox line in the dev PR template: “Screens updated? Test data included? Rollback path?” Metric: QA ping count for missing info per week (baseline: 7). Week 1 after change: 3. Week 2: 1. We decide to keep it, then later turn it into a tool prompt.
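If that checkbox line later graduates into a standing prompt, it can live in the repository itself. On GitHub this is conventionally a .github/pull_request_template.md file; the wording below is one plausible rendering of Scene 3’s three checkboxes, not the team’s actual template.

```markdown
<!-- .github/pull_request_template.md -->
## Definition of Done
- [ ] Screens updated?
- [ ] Test data included?
- [ ] Rollback path?
```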
Across these scenes, a pattern holds: We assumed X → observed Y → changed to Z. The discipline is not in being right first; it’s in closing loops quickly and leaving breadcrumbs.
One pivot we needed to make early
We thought our improvements needed perfect buy‑in to try them. We observed that when we asked for permission or consensus before running a tiny test, the conversation dragged and energy fell. We changed to a “quiet trial” policy for low‑risk tweaks: we run the change for a week in our own work, we measure, and we share the result—only then do we propose adoption. People respond to evidence and reduced uncertainty. Consensus became smoother after we carried the small burden of proof.
This pivot protects creative energy. Debate is valuable, but micro‑experiments thrive when they don’t need a committee.
Lightweight tooling: keep it boring
We can live inside our existing tools with one addition: a short daily check‑in and a weekly review. That’s what Brali LifeOS is for, but the rules fit anywhere.
- Daily micro‑task: 5–12 minutes. Title: “One Step Better.” Content: pick → question → test → log. Metric: minutes and one outcome number.
- Weekly review: 20–30 minutes. Title: “Keep, Improve, Drop.” Content: scan the week’s micro‑tests; choose 1 to standardize, 1 to iterate, and 1 to drop or revert. Record wins in a “Library” section.
We avoid heavy architecture. A single project with a short checklist and one journal view keeps it nimble. Later, if we want, we can tag by domain (Writing, Meetings, Home, Health). We don’t start there.
The 3‑day quick start
Day 1 (today): Pick one routine; run a 9‑minute test; log 2 sentences and 1 number.
- Example: Inbox triage. Change: collapse notifications; enable “archive with E” shortcut; process in three 10‑minute bursts. Metric today: minutes to empty top 25 emails. Target: under 12 minutes.
Day 2: Repeat on a different routine OR iterate yesterday’s change with one more constraint.
- Example: Add a “3‑line reply template” with a clear ask at the end. Metric: response time from 2 key stakeholders (baseline: 22 hours; aim: under 12).
Day 3: Weekly mini‑review (15–20 minutes). Decide: keep, improve, or drop. Standardize one change.
- Example: Decide status emails will use the 3‑line template every Monday, and we’ll only write a narrative on the first Monday of the month. Add a template snippet to our notes app. Add a reminder to Brali.
We avoid building a catalog first. We will build it after we have evidence of 2–3 wins. That way, the catalog remains alive and small.
The anatomy of a good question
When we ask, “Is there a better way?” we often tilt toward technology. That’s fine; tools help. But better often lives at the level of constraints, not apps. We can use these prompts to force useful changes:
- Scope: Can we reduce the width? “Same outcome, fewer words/clicks.” Example: 120‑word limit for updates.
- Timing: Can we shift earlier/later? “Batch late‑day brainless tasks to 4:30 p.m.” Example: file renames and screenshot sweep.
- Defaults: Can we change the default state? “Calendar titles include [Focus] + [Decision].”
- Inputs: Can we add a single missing input? “Include last week’s goal in this week’s summary.”
- Outputs: Can we define done in a checklist of 3? “Done if X, Y, Z.”
- Hand‑offs: Can we remove one negotiation? “Set a standing agenda pattern, no add‑ons in chat.”
We choose one lens per test. We don’t stack more than two lenses in a 10‑minute window; that’s how experiments blur.
A sample day tally
Let’s pretend it’s a Tuesday. We want to hit a daily target: 1 micro‑review test, 1 metric logged, under 12 minutes. Here’s what it could look like:
- 11:50 a.m. — Micro‑review: “Slack interruptions.” Change: set Slack to only notify @mentions and DMs for 2 hours; create a quick status “Heads down until 2 pm; urgent call me.” Time spent: 6 minutes (opening settings, adding status).
- 2:10 p.m. — Observe: interruptions reduced (2 DMs instead of 7 pings). Metric: number of interruption pings 12–2 p.m. Baseline: 7. Today: 2.
- 5:20 p.m. — Debrief in Brali: “Notif filter reduced pings from 7 to 2 in 2 hours; no missed urgent issues. Keep for afternoons.” Time: 1 minute.
Totals:
- Tests run: 1
- Minutes spent: 7
- Outcome: −5 interruptions during focus window
This is enough to “count” for the day. The simplicity keeps the habit alive.
The busy day alternative (≤5 minutes)
We will have days where nothing fits. On those days:
- Bind a new shortcut for one frequent action (e.g., “Archive email (E),” “Mute/Unmute (Ctrl+M),” “Open Daily Doc (Ctrl+Alt+D)”). Time: 3–5 minutes.
- Metric: number of times used today (estimate acceptable). Log: “New shortcut bound. Used 6 times. Felt smoother.” That’s it.
This keeps the loop intact: choose → change → measure → note.
Edge cases and limits
- Shared processes: If the routine is shared, we run the test privately where possible (our slice of the process), measure, and then share evidence. We avoid surprising others with unilateral changes to team‑critical pathways. Low‑risk changes: templates, checklists, timing windows, naming conventions in our own files.
- Compliance and risk: If regulatory or safety rules exist, we do not change steps that impact safety, audit trails, or approvals without review. Our arena is often the experience layer: clarity, handoffs, batching, timing, labeling, templates.
- Novelty seeking: It’s easy to chase newness. We guard against changing for change’s sake with the “3‑week rule”: standardize a change only after it has survived three weeks of normal conditions.
- Data poverty: Some outcomes are hard to measure immediately. We can use proxies. If we can’t measure ROI yet, we log friction reduction (self‑reported on a 1–5 scale) as a placeholder.
- Plateau: Improvement can stall after we harvest low‑hanging fruit. That’s normal. We switch domains (e.g., from Meetings to Home), or we revisit one mature process for deeper re‑design once a month in a 45‑minute slot.
Misconceptions we can drop now
- “Creative improvement requires big ideas.” No. It requires small questions asked regularly. Big changes emerge from stacked proofs.
- “We need buy‑in first.” For low‑risk trials, we need a timer and a notebook first. Evidence invites buy‑in.
- “If it’s not automated, it’s not improved.” No. A checklist with three explicit lines can outperform clumsy automation.
- “We should fix the worst process.” Sometimes. But high‑frequency, medium‑annoyance processes yield more cumulative gains and are easier to change safely.
We can be empathetic to ourselves here. We were taught to think of creativity as novelty. In operations, creativity is clarity plus deletion.
A week in scenes: building rhythm
Monday 8:55 a.m. We open the laptop with a small vow: we will ship one 9‑minute improvement by noon. Calendar titles are vague; we append “[Focus: Draft Review]” to two meetings and add a single “Decision” line to each invite. We log 9 minutes. We feel a tiny lift: future us will be less confused.
Tuesday 1:10 p.m. We stand with a sandwich looking at a life admin list. We choose passwords. We realize password reset emails always bury the reset link below a banner. Our test: set email client to auto‑collapse images in password reset emails for an hour. Metric: minutes to reset 2 passwords. Baseline: ~7 minutes. Today: 3. We jot it down. We decide to keep images off in that folder.
Wednesday 4:35 p.m. Writing hours are noisy. We set a 90‑minute block with phone in another room and a kitchen timer. We do not track words; we track minutes of uninterrupted writing above 20 minutes. Baseline: 0. Today: 2 blocks. Our test was not about output; it was about input conditions. We write a note: “Keep phone far during writing. The physical distance helps.”
Thursday 11:15 a.m. A handoff to a colleague is messy. We usually dump a link and say “LMK.” We test a “Handoff 3” template: One sentence context, two bullets changes, one bullet ask. We send it. We watch response latency. Baseline: 26 hours. Today: 4 hours. The colleague says, “This is clear, thx.” We feel warmth.
Friday 3:05 p.m. We do a weekly “Keep, Improve, Drop.” We open Brali and sort the week’s tests: Monday’s meeting titles (keep), Tuesday’s images off for password resets (keep), Wednesday’s phone distance (keep), Thursday’s handoff template (keep), one failed test where we tried to route all Slack to email (drop—latency too high). We formalize two changes by adding templates/snippets.
This rhythm requires practice, but after two weeks it feels less like another task and more like articulating our own preferences.
Creative questions for common domains (use sparingly)
We include these as springboards, not checklists. Choose one and test.
Writing and communication:
- Can we cut any status update to 120 words and add one explicit ask?
- Can we replace two weekly emails with one shared dashboard link?
- Can we write subject lines with [Verb] + [Object] + [By When]?
Meetings:
- Can we enforce an early decision point at minute 10 for 30‑minute meetings?
- Can we send agendas in the calendar invite, not in chat?
- Can we default to 25/50‑minute blocks to protect transitions?
Files and versions:
- Can we add a YYYY‑MM‑DD prefix and a version suffix v0.1, v0.2?
- Can we auto‑rename screenshots during capture?
- Can we keep a “Staging” folder where files live for 7 days before archiving?
Health and energy:
- Can we lock a 150‑second stretch after long sits?
- Can we set lights to shift warmer at 8 p.m. automatically?
- Can we prepare a 220‑gram vegetable bowl at lunch to avoid afternoon slump?
Home logistics:
- Can we set a 12‑minute “reset” timer after dinner?
- Can we keep a “spares basket” for cables and label it by port?
- Can we batch small returns and errands on one weekday morning?
Again, we choose one, run a tiny test, and log a number. The discipline is in leaving many good ideas for later.
A constraint that helps: budgets for change
We borrow a page from personal finance: assign budgets to improvement experiments.
- Time budget: 60 minutes per week total. That’s six 10‑minute tests or a mix of three short and one longer.
- Risk budget: 0 for compliance, 1 for reversible changes, 2 for changes that might annoy others, 3 for changes that could break something. We stick to 0–1 during our first 2 weeks.
- Social budget: 1 “ask for feedback” per day max. We avoid fatiguing our circle with too many meta‑requests.
This protects our relationships, our schedule, and our credibility. It also makes “no” easier. If someone asks us to overhaul the onboarding doc today, we can say, “My improvement budget is capped this week; I can contribute a 9‑minute test.”
Handling resistance—ours and others’
We will feel friction. We will hear a voice: “This is small. It won’t matter.” We can remember that small changes add up, and small changes can also spark bigger shifts. We can track one tangible benefit per week: minutes saved, errors avoided, or a quote from someone who appreciated clarity.
Others may resist too. We can speak in outcomes, not ideology. “We tried a 120‑word status with one ask. We got replies in 4 hours instead of a day. Want to try for two weeks?” That’s different from “We’re switching to my template now.”
If someone refuses categorically, we can run the test in our slice and wait. Results soften edges.
What we automate and when
We do not automate first. We automate after a change has stabilized for 3 weeks with clear benefits. Then:
- Snippets: Text expanders for templates (e.g., “;status3” expands to the 3‑line status).
- Macros: File renames, screenshot moves, folder creation (e.g., “Project/YYYY‑MM‑Week N”); a minimal sketch follows after this list.
- Shortcuts: One‑key toggles for mute, video on/off, screen record, daily doc open.
- Schedules: Calendar blocks that trigger focus modes and status messages.
Rule of thumb: automate when a manual step repeats ≥3 times per week and costs ≥2 minutes each time, and when the failure cost of the automation misfiring is low. We err on the side of later to avoid tool churn.
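As one concrete instance of the “macros” bullet above, here is a minimal sketch that sweeps loose screenshots into a dated inbox folder. The Desktop source, the Screenshots/Inbox destination, and the “Screenshot*.png” naming pattern are assumptions about a typical macOS setup, not a prescribed layout.

```python
from datetime import date
from pathlib import Path

DESKTOP = Path.home() / "Desktop"              # assumed capture location
INBOX = Path.home() / "Screenshots" / "Inbox"  # assumed destination folder

def sweep_screenshots() -> int:
    """Move Desktop screenshots into Screenshots/Inbox with a YYYY-MM-DD prefix."""
    INBOX.mkdir(parents=True, exist_ok=True)
    moved = 0
    for shot in DESKTOP.glob("Screenshot*.png"):  # macOS-style default filenames
        target = INBOX / f"{date.today().isoformat()} {shot.name}"
        if not target.exists():                   # never overwrite an earlier sweep
            shot.rename(target)
            moved += 1
    return moved

print(f"Moved {sweep_screenshots()} screenshot(s)")
```

By the rule of thumb above, this only earns its keep once the manual sweep repeats three or more times a week.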
When the change doesn’t stick
Sometimes the change “works” technically but we stop using it. We can diagnose:
- Friction: The new step requires an extra click or context switch.
- Visibility: We forget about it (out of sight, out of mind).
- Alignment: The change benefits others but not us, or vice versa.
- Timing: The window for the routine moved.
We can respond by:
- Removing one click (pin, shortcut, toggle).
- Placing a cue where we work (sticky note on monitor, pinned note in app).
- Negotiating benefits (make the ask explicit, share wins).
- Moving the micro‑review window to a more honest time.
We don’t scold ourselves. We adjust the environment.
A numbers‑based vignette: estimating real gains
Let’s quantify gains over a month with plausible numbers. Suppose we run 12 micro‑tests (3 per week). Outcomes:
- 5 clear wins: each saves 2–5 minutes per run, with frequencies from 1–5 times per week.
- 4 neutral: negligible change.
- 3 losses: revert; cost 10 minutes each upfront.
We tally:
- Wins: Suppose an average of 3 minutes saved per run, average frequency 3 times per week. That’s 5 × 3 × 3 = 45 minutes saved per week.
- Losses: 3 × 10 = 30 minutes cost once, so amortized over 4 weeks ≈ 7.5 minutes per week.
- Net weekly gain ≈ 45 − 7.5 = 37.5 minutes.
- Over 4 weeks: ≈ 150 minutes (~2.5 hours).
This is conservative. If one win hits a daily routine (e.g., improved inbox triage saves 4 minutes per day), that adds 20 minutes per week alone. The math is not heroic; it’s comfortable.
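The same tally works as a script we can rerun with our own numbers; every value below is the illustrative figure from the vignette, not a measurement.

```python
# Monthly net-gain estimate for 12 micro-tests (illustrative figures from above).
wins, losses = 5, 3
avg_minutes_saved_per_run = 3   # per winning change
avg_runs_per_week = 3           # per winning change
loss_cost_minutes = 10          # one-time cost per reverted test
weeks = 4

weekly_win = wins * avg_minutes_saved_per_run * avg_runs_per_week  # 45 minutes
weekly_loss = losses * loss_cost_minutes / weeks                   # 7.5 amortized
net_weekly = weekly_win - weekly_loss                              # 37.5 minutes
print(f"Net: {net_weekly} min/week; {net_weekly * weeks} min over {weeks} weeks")
```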
Using Brali LifeOS to hold the practice
We keep it simple:
- A daily task: “One Step Better (9 minutes).”
- A micro‑checklist in the task: Pick target → State constraint → Run test → Log metric → Debrief (2 sentences).
- A Journal view: one entry per micro‑test with date, target, change, metric, result.
- A weekly task: “Keep, Improve, Drop (20–30 minutes).”
- A lightweight “Library” list with 3 fields (Name, When, How).
This is not a sales line; it’s a coordination point. We could use paper. We choose Brali because it reduces context switching and templatizes the debrief.
Practicing under uncertainty: when we can’t see results yet
Some improvements play out over weeks. Example: switching to biweekly sprints or adjusting meeting cadence. We still apply the micro‑test lens: we pick a tiny proxy and a short horizon.
- Proxy: number of carry‑over tasks per sprint (baseline: 9; aim: under 5).
- Horizon: 2 sprints (4 weeks).
- Micro‑tests inside: a 10‑minute change to definition of done, a 9‑minute change to daily standup format. We log immediate metrics (duration, clarity scores) plus the proxy over time.
We remind ourselves: macro changes are built from micro proof points.
How this ties to personal energy
We talk about minutes and clicks, but the real gains show up in the body. When a process clarifies, the chest feels less tight. When a handoff lands clean, we stop rehearsing in our head all afternoon. The creative habit is not abstract; it reduces background noise. This itself is a metric we can track once a week: “friction rating” on a 1–5 scale for one routine we care about. It’s subjective but actionable. We’re allowed to choose relief as a datapoint.
A small gallery of “before → after” with numbers
- Before: Weekly report at 23 minutes, 0 explicit asks → After: 11 minutes, 1 ask, replies within 6 hours (from 24). Net: −12 minutes, +clarity.
- Before: 15 desktop screenshots at EOD → After: 0–3, with a 2‑minute daily sweep. Net: −12 visual clutter items per day, −cognitive drag.
- Before: Standup 15 minutes, 8 people, meandering → After: 9 minutes, round robin capped at 45 seconds each, parking lot for blockers. Net: −6 minutes, +focus. The test: a visible timer and a posted agenda.
- Before: Slow code review cycle, PRs lack context → After: PR template with 3 lines: “What changed, why, how to test.” Net: QA back‑and‑forth messages: −60% (from 10 to 4 per PR).
- Before: Dinner cleanup drifts to 45 minutes → After: “12‑minute reset” timer with 2 people, zones assigned. Net: −20 minutes average, fewer next‑morning annoyances.
These are not dramatic innovations; they are the scaffolding that lets larger work breathe.
What to do today (concrete)
- Choose one routine we ran at least once this week that annoyed us (1 minute).
- Set a 9‑minute timer.
- Apply one constraint lens (word cap, time cap, template, timing shift, default change, Definition‑of‑Done line).
- Run the change once. Measure one number (minutes, counts, replies).
- Debrief in two sentences in Brali. Decide: keep for a week or revert. Schedule a 15‑minute review for Friday.
We can stop here. If we want a stretch, we add a second 9‑minute slot tomorrow with a different routine.
Sample Day Tally (target: 1 micro‑review)
- Routine: Weekly status email
- Change: 3‑line template (Objective, Progress, Blockers/Ask) with 120‑word cap
- Time spent running test: 9 minutes
- Metric 1: Minutes to compose (down from 23 → 11)
- Metric 2: Replies within 6 hours (up from 1/5 → 3/5)
- Total daily improvement time: 9 minutes
- Minutes saved today: ~12 on composing (net ~3 after the 9‑minute test)
- Emotional note: Felt lighter; less dread when composing
Totals today:
- Tests: 1
- Minutes invested: 9
- Minutes reclaimed: 12
- Signal: Keep for 2 more Mondays, then standardize
Risks and boundaries
- Over‑optimization: We can spend more time optimizing than doing. Remedy: weekly cap of 60 minutes on improvement tasks.
- Hidden dependencies: Some steps exist for reasons we don’t see. Remedy: if touching shared artifacts, ask, “What breaks if we remove this? Who depends on it?”
- Fatigue: The habit can feel like another pressure. Remedy: we allow “no test” days, and we keep the busy‑day alternative (shortcut binding) to preserve continuity.
- Metric myopia: We can chase easy numbers and ignore quality. Remedy: pair a hard metric (minutes) with a soft one (clarity/quality rating 1–5) once a week.
We keep the habit humane.
A short detour into identity
We call this category “Be Creative,” and we mean something steady: “We are people who ask useful questions and run small tests.” It’s less glamorous than it sounds, and more persistent than brainstorming. Creativity here is not a mood; it’s a posture and a loop.
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. This hack is one of those tools.
Implementation notes we learned the hard way
- Place the micro‑review after a natural break. Right after lunch works; right before commute sometimes does not.
- Write the debrief first if you’re stuck: “I want X. I will try Y. I expect Z.” Then go do Y.
- Don’t fight tools you dislike during the test. If your company lives in Google Docs, run the test there. The change is the constraint, not the platform.
- Label experiments with version numbers: v0.1, v0.2. It signals permission to be wrong.
- Share short wins with exact numbers once a week, not daily. It preserves attention.
We assumed we should teach the framework first and then ask people to try it → we observed that people tried it more when we gave them a single test to run that day → we changed to this “do one now” format. Teaching follows action.
Integrating check‑ins
We will use Brali LifeOS to scaffold this.
Check‑in cadence:
- Daily: 3 questions (30–60 seconds)
- Weekly: 3 questions (3–5 minutes)
- Metrics: 1–2 numbers to log
We place the check‑ins inside the daily micro‑task and the weekly review so they blend with action. We’re not collecting data for data’s sake; the numbers are levers for decisions.
What failure looks like and why it’s okay
We will try a change and make things worse. Example: we compress a meeting and miss an important discussion. We log it: “Standup cutoff at 8 minutes caused rush; 2 people felt unheard. Revert to 12 minutes with a 2‑minute buffer.” The cost is the 9 minutes we spent testing plus one awkward moment. The benefit is clarity about the boundary of efficiency. We learn the minimum viable meeting time for this team is 12 minutes, not 8. That is useful knowledge.
A 30–40% “failure” rate in micro‑tests feels high on paper. It is the sign of honest exploration. We keep it bounded, and we keep going.
A final practice scene
We are about to shut the laptop. We open Brali LifeOS. We tap “One Step Better.” We scroll through three possible targets from today. We pick the one that made us swear under our breath: searching for the right Zoom link. We create a single rule: “Meeting titles must contain the Zoom link in the location field; agenda in the description.” We edit the next three invites. We set a timer for 9 minutes; we finish in 6. We note the metric for tomorrow: “Number of times I click the wrong link.” Today’s baseline: 3. Tomorrow’s target: 0. We close the laptop. We feel a small but real sense of alignment. The world moved a millimeter in our favor.
Check‑in Block
Daily (30–60 seconds)
- What routine did I question today? Name it in 3–5 words.
- What single change did I test? One sentence.
- What was the immediate result? Log one number (minutes, count, or %), plus a 1–5 clarity/friction rating.
Weekly (3–5 minutes)
- Which one change will I keep and standardize next week? Why?
- Which change needs one more iteration? What constraint will I try?
- What will I drop or revert? What did I learn?
Metrics to log
- Count: number of micro‑tests run this week (target: 3)
- Minutes: time saved on the best routine this week (estimate acceptable; target: ≥10 minutes)
Closing the loop today
- Pick one routine you ran today that bugged you.
- Set a 9‑minute timer.
- Change one constraint and run it once.
- Log one number and two sentences in Brali.
- Schedule a 15‑minute Friday review to keep, improve, or drop.
If we do this once, we prove we can do it. If we do it thrice in a week, we become the kind of people who quietly make work easier.
