Question Assumptions About Technology
How to When Using or Observing Tech: Ask Yourself “Am I Assuming This Tool Can…?” (Cognitive Biases)
Hack №: 1035
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works. Practice anchor:
We begin with a small scene because habits are made of small scenes. We stand at a conference demo table where a compact robot with polished aluminum smiles through a ring of LEDs. Someone nearby says, “It looks advanced.” We feel it: a tilt toward trust, a readiness to believe. We take the robot’s brochure, glance at a row of specs, and notice words that read like promises. Without testing, we assign competence: it must be smarter. Later, in a lab or on our desks, our untested assumption meets reality. The robot can follow a bright line on the floor 8 out of 10 times, but it cannot interpret a human pointing gesture. The moment between assumption and test defines the habit we want to build: pause, ask, test, log.
Hack #1035 is available in the Brali LifeOS app.

Background snapshot
The tendency to infer capability from appearance has roots in cognitive psychology and product signaling. Designers exploit visual cues (size, finish, brand) that historically correlated with capability; our perceptual system then takes shortcuts. Common traps include reliance on surface features rather than functionality, overgeneralizing from a few demonstrations, and mistaking complexity for competence. Often projects fail because teams budget for a tool before empirically testing its core functions—about 60–75% of small tech pilots change scope after simple capability checks. What changes outcomes is early, low‑cost testing and clearly defined acceptance criteria.
This long read is practice‑first. We will move toward immediate action: a single 10‑minute micro‑task, a structured test protocol we can run today, habits to check in daily, and a sample day tally that quantifies the effort. We will clarify trade‑offs: time spent testing reduces time saved by misplaced trust but increases long‑term reliability. We assumed quick visual cues were enough → observed repeated mismatches between appearance and function → changed to a structured "look, ask, test, record" routine. That pivot is the backbone of the habit.
Why this helps (one sentence)
Asking whether we are assuming capabilities by appearance reduces deployment errors, saves time (often tens of hours), and improves safety and trust in the tools we adopt.
How to use this long read
- Read as a flowing thought process, not a checklist. We narrate decisions, small rehearsals, and the exact micro‑tasks you can do today.
- Use the Brali LifeOS app for tasks, check‑ins, and your journal. It's where this habit lives and where we track progress: https://metalhatscats.com/life-os/vendor-claims-validator
- Aim for a first micro‑task of ≤10 minutes. After that, we’ll expand testing to 30–90 minute sessions depending on the tool.
Section 1 — Why form matters and how we misread it
A polished case, brushed metal, and a glowing logo are shorthand. They are quick estimates that evolved for good reason: historically, better‑made objects often did better work. But design cues are now a marketable surface layer. Imagine two devices: Device A, matte grey, compact, advertised with a 2.4‑inch display and a single camera; Device B, shiny chrome, two cameras, and a smiling face motif. We might expect B to be more capable. If we test for facial recognition, we might find Device B recognizes faces in 7 of 10 controlled images, while Device A recognizes them in 9 of 10. The shiny look did not equal better performance.
We decide, right now, to treat visual cues as prompts to test, not proof. The task is simple: pick one device, software feature, or automation and discover three core functions it must do for us. Define measurable criteria for each function (e.g., accuracy ≥ 90%, latency ≤ 1.5 seconds, or battery life ≥ 3 hours under continuous use). These criteria are not arbitrary; they derive from our expected use case. Setting numbers clarifies decision thresholds.
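To make this concrete, here is a minimal sketch in Python (illustrative only; the device, functions, and thresholds are invented) of how the three functions and their acceptance criteria might be written down before any testing starts:

```python
# Illustrative acceptance criteria for a hypothetical smart speaker.
# Each function maps to one measurable threshold defined before testing.
criteria = {
    "identify short voice commands": {"metric": "correct commands out of 10", "threshold": 8},
    "respond to a recognized command": {"metric": "latency in seconds (max)", "threshold": 1.5},
    "run on battery under continuous use": {"metric": "hours (min)", "threshold": 3.0},
}

for function, rule in criteria.items():
    print(f"{function}: pass if {rule['metric']} meets {rule['threshold']}")
```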
Action now (≤10 minutes)
- Choose one tool within reach: a smart speaker, a phone app, a “smart” kettle, or an automated spreadsheet macro.
- Write down 3 functions you expect it to perform.
- For each function, write one measurable criterion (e.g., "If it claims to identify speech, it should correctly identify 8/10 distinct short commands in our test conditions").
- Save this as a short task in Brali LifeOS: "Tool test — define 3 functions & criteria (10m)".
After we set the criteria, we move from assumption to test.
Section 2 — The micro‑experiment protocol (our standard test)
We have an approach that balances rigor and speed. It’s modeled on lab methods but simplified for practical contexts. We call it the 3×3×3 quick probe.
3×3×3 quick probe
- 3 tests for each of 3 functions.
- Each test takes ≤3 minutes.
- Log results immediately.
Why these numbers? They are a low friction commitment—9 short trials give a usable estimate (a proportion with a margin of error that’s often sufficient for operational decisions). A pattern becomes visible with as few as 5–10 trials, and with 9 trials per function we often see consistent results unless the system is very variable.
We acknowledge trade‑offs: 9 trials per function are not a statistically exhaustive sample. If the tool is safety‑critical, we escalate to 30+ trials or lab certification. For everyday decisions—choosing a notebook app or a smart light—we accept this threshold as informative.
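For readers who want to see why 9 pass/fail trials are "usable but not exhaustive," here is a minimal sketch (the trial outcomes are made up) that turns 9 results into a pass rate and a rough normal‑approximation margin of error:

```python
import math

# Hypothetical pass/fail outcomes for one function, 9 trials (True = pass).
trials = [True, True, False, True, True, True, False, True, True]

n = len(trials)
p = sum(trials) / n                           # observed pass proportion (7/9 here)
margin = 1.96 * math.sqrt(p * (1 - p) / n)    # rough 95% margin of error

print(f"Pass rate: {p:.0%} +/- {margin:.0%} over {n} trials")
# The margin stays wide at n=9 (roughly +/-27 points here), which is why we
# escalate to 30+ trials for safety-critical tools.
```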
Micro‑setup (10–20 minutes)
We frequently underestimate the setup time. Plan 10–20 minutes to position the device, prepare three stimuli (voice commands, images, inputs), and get a stopwatch or timer. If the device requires Wi‑Fi or account setup, allow that overhead. In many cases, we can reuse previous Wi‑Fi credentials or a guest network with known latency.
Example scenario: testing a "smart" doorbell that advertises person detection.
- Function 1: Detect a person within 4 m at night (criterion: detect ≥ 8/9 tests).
- Function 2: Send a notification within 5 seconds (criterion: ≤ 5 s delay in ≥ 8/9 tests).
- Function 3: Avoid false positives from passing cars (criterion: ≤ 1 false alarm in 9 tests from a standard car pass at 6 m).
We place a volunteer or a cardboard cutout at measured distances and run the 3×3×3 tests. We log time to notification and whether detection occurred.
We assumed a "person detection" label implied robust night detection → observed detection failed at 4 m in low light → changed to testing at multiple distances and using alternative products for night use only. This is the explicit pivot we mentioned earlier.
How we log
Log as soon as possible. In Brali LifeOS, create a test note with:
- Device name, firmware version (if applicable), date/time.
- Three functions and criteria.
- For each trial: pass/fail, latency (seconds), and a very short note (e.g., "blocked by reflective jacket").
Logging helps not only immediate decisions but also vendor conversations and warranty claims. If a vendor replies, we have trial data: dates, counts, and measured metrics.
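If the app is not handy, the same log works as a plain CSV file. A minimal sketch (device name, firmware version, and values are invented) that writes two trial rows with the fields listed above:

```python
import csv
from datetime import datetime

# Illustrative trial records; fields mirror the note template above.
rows = [
    {"device": "Doorbell X", "firmware": "1.4.2", "logged_at": datetime.now().isoformat(timespec="seconds"),
     "function": "person detection at 4 m, night", "trial": 1, "result": "fail",
     "latency_s": "", "note": "blocked by reflective jacket"},
    {"device": "Doorbell X", "firmware": "1.4.2", "logged_at": datetime.now().isoformat(timespec="seconds"),
     "function": "notification within 5 s", "trial": 1, "result": "pass",
     "latency_s": 3.2, "note": ""},
]

with open("tool_test_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```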
Section 3 — Micro‑scenes and tiny choices that shape tests
A test is also a social and logistical decision. We are often in rooms with colleagues, family, or vendors. Each small choice—who holds the phone, which room we pick, whether we tell the vendor we’re testing—affects behavior.
We describe three micro‑scenes to show how we make choices.
Scene A: The office demo
We stand at a demo table with a product manager. They offer a guided demonstration. We accept but we also ask for an unguided test: “Could we try our own three tasks?” The product manager hesitates but agrees. We run our 3×3×3 probe and notice the product works well for guided demos (9/9) but fails for unguided real‑world inputs (4/9). We learn that the demo was optimized for specific inputs, not general use.
Scene B: The kitchen gadget at home
We buy a smart kettle that claims "precision pour" to 65 °C. We decide to test it with a digital thermometer. Our first decision is to accept the default water volume: 250 ml. We measure and find it reaches 66.5 °C on average after 3 trials. The spread is ±1.2 °C. We decide that for our use (brew at 65 °C ±2 °C), this is acceptable. We log: passes 2/3 within ±1 °C, 3/3 within ±2 °C. If we were making delicate teas that require exactly 65.0 °C ±0.5 °C, we would need a different kettle.
Scene C: The thermostat we inherit
We inherit a smart thermostat that "learns" our schedule. After 2 weeks it sets temperature patterns that conflict with our preferences. Instead of assuming "learning" means "improving," we run short tests: we set explicit schedules and compare energy use. We measure energy consumption (kWh) for 7 days before and after. The device reduced active heating hours by 10% but increased the average setback temperature by 1.5 °C, which we found uncomfortable. The lesson: learning features may optimize for the vendor's metrics (efficiency) rather than our comfort.
In each micro‑scene we made small choices—declining to accept the demo, measuring with an instrument, capturing energy metrics. Those small choices are replicable habits.
Section 4 — Common assumptions and exact tests to disarm them
We list several common assumptions we make about tech and then give precise tests we can run.
Assumption 1: "If it looks advanced, it's reliable." Test: Run offline consistency checks across 9 trials for a single function. Measure variance (standard deviation or range). If the device claims repeatability, expect range ≤ 10% of the mean for non‑stochastic systems.
Assumption 2: "If the vendor demo showed it, it will work in our environment." Test: Recreate your environment (lighting, noise, Wi‑Fi) and run 9 trials. If outcomes differ by >30% from the demo, assume the vendor optimized the demo.
Assumption 3: "Bigger/sleeker equals faster/better." Test: Time latency (seconds) for the function, 9 times. Compare to a smaller/cheaper alternative. If the faster claim is only marginal (<0.5 s) but costs 2–3× more, the trade‑off may not be worth it.
Assumption 4: "More sensors mean fewer blind spots." Test: Present three edge cases (low light, reflective surfaces, occlusion) and measure detection success. Count passes/fails. If the device fails any 2/9 times on edge cases, note limitations.
Assumption 5: "A learning model will improve by itself." Test: Baseline performance with initial configuration (9 trials). Use the model over 14 days, logging 20–30 instances. Compare metrics. If performance does not improve ≥10% in primary metrics, the 'learning' claim is questionable for our use.
After each list item we pause. Each test costs time. We choose how much time to invest based on risk and use. For a baby monitor or medical device, escalate testing. For a lightbulb, keep testing minimal. These trade‑offs are the heart of disciplined adoption.
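As one concrete instance of the Assumption 1 repeatability check, a minimal sketch (the nine measurements are invented) that compares the trial‑to‑trial range against the 10%-of-mean threshold:

```python
# Nine hypothetical measurements of the same quantity (e.g., response time in seconds).
measurements = [1.42, 1.39, 1.45, 1.40, 1.44, 1.41, 1.43, 1.38, 1.46]

mean = sum(measurements) / len(measurements)
spread = max(measurements) - min(measurements)

print(f"mean = {mean:.2f}, range = {spread:.2f} ({spread / mean:.1%} of the mean)")
if spread > 0.10 * mean:
    print("Range exceeds 10% of the mean: treat the repeatability claim as questionable.")
else:
    print("Within the 10% repeatability threshold for a non-stochastic system.")
```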
Section 5 — Quick hardware tests with numbers
We often need to check physical properties fast. Here are three quick hardware checks with exact numbers that take ≤20 minutes.
Battery and idle draw
- Test: Measure battery life by running a continuous loop (e.g., video playback, sensor polling) and note time to 20% battery.
- Metric: minutes to 20% battery from full charge.
- Routine: Charge to 100%, run the loop at typical brightness/volume, record time stamps at 80%, 50%, 20%.
- Example: If a device advertises 8 hours and we see 5 hours to 20%, that’s a 37.5% shortfall.
Sensor latency
- Test: For audio or motion sensors, measure the time between stimulus and notification. Use a stopwatch: start at stimulus, stop at notification.
- Metric: mean latency in seconds across 9 trials.
- Example: If a door sensor notifies in 0.4 s on average with SD 0.1 s, that's fast. If it averages 6.2 s with SD 2.3 s, it's unsuitable for real‑time alarms.
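For the sensor latency check above, a minimal sketch (the nine stopwatch readings are invented) that reduces the trials to the mean and standard deviation we compare against our criterion:

```python
import statistics

# Nine hypothetical stimulus-to-notification times in seconds.
latencies = [0.4, 0.5, 0.3, 0.4, 0.6, 0.4, 0.5, 0.3, 0.4]

mean = statistics.mean(latencies)
sd = statistics.stdev(latencies)  # sample standard deviation across the 9 trials

print(f"Mean latency: {mean:.2f} s, SD: {sd:.2f} s over {len(latencies)} trials")
# Compare against the criterion defined up front, e.g. mean <= 1.0 s for a real-time alarm.
```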
Mechanical repeatability
- Test: For motors or actuators, measure the positional repeatability across 9 actuations. Use a ruler or caliper to measure displacement.
- Metric: mean error in mm.
- Example: A camera gimbal advertised for "precision framing" should place within ≤3 mm repeatability across 9 trials for close‑range framing.
Section 6 — Software and model checks with numbers
Software claims are often about accuracy or speed. We translate vague claims into numbers.
Accuracy of classification (vision, speech)
- Test: Prepare 30 labeled inputs (10 per class), randomized. Run the model and record correct/incorrect classifications.
- Metric: accuracy percentage.
- Quick path: If 30 is too many, use 9 trials per class (27 total). For many consumer decisions, 9 per class gives a usable estimate.
- Example: If a model claims 95% accuracy in speech recognition for accented English, and we test with 27 samples and get 21 correct, the observed accuracy is 77.8%—far below the claim.
Latency and throughput
- Test: Measure time for an operation (e.g., image upload + inference) 9 times and compute mean and 90th percentile.
- Metric: mean latency (s), 90th percentile latency (s).
- Example: An app claiming "near real‑time with <2 s latency" that shows mean 1.6 s but 90th percentile 3.8 s may still feel sluggish for some users.
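A minimal sketch of both software metrics above on invented data: classification accuracy from labeled samples, and mean plus nearest‑rank 90th‑percentile latency (the numbers are chosen to echo the examples, not measured):

```python
import math

# Hypothetical (expected, predicted) labels for 9 short speech samples.
samples = [("yes", "yes"), ("yes", "no"), ("no", "no"), ("stop", "stop"), ("yes", "yes"),
           ("no", "no"), ("stop", "stop"), ("no", "yes"), ("stop", "stop")]
accuracy = sum(expected == predicted for expected, predicted in samples) / len(samples)

# Hypothetical upload + inference times in seconds, 9 trials.
times = sorted([1.2, 1.3, 1.3, 1.3, 1.3, 1.4, 1.4, 1.4, 3.8])
p90 = times[math.ceil(0.9 * len(times)) - 1]  # nearest-rank 90th percentile

print(f"Accuracy: {accuracy:.1%}")
print(f"Mean latency: {sum(times) / len(times):.2f} s, 90th percentile: {p90:.2f} s")
```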
Data privacy and transmission check
- Test: With a network monitor, check where data flows. Count external endpoints contacted during one representative session.
- Metric: number of unique external IPs contacted per session.
- Example: A connected camera that contacts 7 external endpoints for a single stream may expose more metadata than expected.
Section 7 — How to scale testing for teams and procurement
We make procurement decisions by layering tests. For small purchases, the 3×3×3 probe suffices. For procurement involving tens to hundreds of units, we scale:
Pilot structure (for 5–50 units)
- Week 0: 3×3×3 baseline tests on 3 pilot devices.
- Week 1–4: Deploy to 5–10 representative users with clear acceptance criteria.
- Metrics: uptime (%), false positives per device per day, mean latency.
- Decision rule: If ≥80% of devices meet acceptance criteria after 14 days, proceed to order; otherwise iterate.
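To keep that decision rule unambiguous across the team, a minimal sketch (unit names and outcomes are invented) that applies the ≥80% threshold to per‑device results:

```python
# Hypothetical pilot outcome per device after 14 days: True = met all acceptance criteria.
pilot_results = {
    "unit-01": True, "unit-02": True, "unit-03": False, "unit-04": True, "unit-05": True,
    "unit-06": True, "unit-07": False, "unit-08": True, "unit-09": True, "unit-10": True,
}

pass_rate = sum(pilot_results.values()) / len(pilot_results)
decision = "proceed to order" if pass_rate >= 0.80 else "iterate before ordering"
print(f"{pass_rate:.0%} of pilot devices met criteria -> {decision}")
```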
For larger procurements (50+ units), include additional tests:
- Stress tests (continuous operation 24–72 hours).
- Security scan (1 external pen test).
- Supply chain and firmware update policy review.
We quantify: a meaningful pilot for 10 devices typically requires 2–3 person‑days (16–24 person‑hours) across setup, testing, and analysis. That is small compared with the cost of a failed deployment that can require 100–300 person‑hours to remediate.
Section 8 — Misconceptions and edge cases
We confront misconceptions directly and give precise corrective habits.
Misconception: “If one test passes, the tool is good.” Reality: Performance varies. Use multiple trials (≥9) and vary conditions (light, noise, distance). Accept that a single pass is an anecdote.
Misconception: “Early adopters know the product well enough for me.” Reality: Early adopters often tweak settings or live in specialized contexts. Replicate their configuration only if it fits our context. Otherwise, return to baseline.
Edge case: Intermittent bugs
- These appear in 1–3 of 9 trials unpredictably. Track frequency. If glitch frequency >10% of interactions, plan for workarounds or vendor escalation.
Edge case: Version drift
- Firmware or app updates can change behavior overnight. Record firmware/app versions in tests. Re‑run the 3×3×3 probe after any major update.
Section 9 — The emotional economy of testing
Testing feels like friction. We might resent spending 30–90 minutes on something that “should just work.” That’s normal. Reframe testing as insurance on the investment: 30 minutes now can save 6–60 hours later. We also feel relief when an assumption is confirmed; we feel frustration when it’s not. Both are useful signals. We log emotions in Brali: relief = green; frustration = orange. These subjective notes often identify hidden costs (training needed, extra maintenance).
We choose our emotional posture: curious experimenter rather than suspicious critic. Curiosity keeps us open to useful innovations while still collecting evidence.
Section 10 — Making a habit: daily and weekly routines
We need to embed the habit of checking assumptions into our workflow. We propose a habit loop linked to common triggers.
Trigger: encountering a new device, demo, or software claim.
Routine: Ask the question “Am I assuming this tool can…because of how it looks?” Define 3 functions, set criteria, run 3×3×3 quick probe, log results.
Reward: brief journal entry noting one clear outcome and one decision (keep, modify, reject).
This is practical. We set a default: for any tool with purchase cost > $50 or any automation that affects >3 people, run the 3×3×3. For minor items (<$50) or personal convenience, we apply a 5‑minute quick test (see the busy path below).
We quantify expected weekly time commitment: assume 4 small tests and 1 medium test per week. A small test: 10–15 minutes each (40–60 minutes total). A medium test: 30–90 minutes. Total weekly time: 70–150 minutes. For teams this can be assigned across individuals.
Section 11 — Sample Day Tally
We present a short, concrete day showing how we might reach a “trusted decision” about a new tool.
Goal: Decide whether to integrate a new meeting transcription assistant into our team’s workflow. Desired certainty: accuracy ≥ 85% for 1‑minute snippets, latency < 2 s for live transcription.
Items we use to reach the goal:
- Item 1: Prelim definition & task saved in Brali LifeOS (5 minutes).
- Item 2: Prepare 9 recorded 1‑minute segments with varied voices and accents (15 minutes).
- Item 3: Run 9 transcriptions via the assistant and measure accuracy for each (27 minutes — 3 minutes per trial).
- Item 4: Measure latency for each trial (9 trials × 10–20 s to log = 5 minutes).
- Item 5: Quick group debrief and decision entry in Brali (10 minutes).
Totals
- Time: 62 minutes (just over 1 hour).
- Trials: 9 transcription trials.
- Metrics recorded: accuracy (%) per transcript, mean latency (s).
Example results
- Observed accuracy: 7/9 snippets transcribed at ≥85% accuracy = 77.8% success rate.
- Mean latency: 1.8 s; 90th percentile: 2.4 s.
Decision: Unsuitable for live meeting transcription where ≥85% accuracy is required. Suitable as an assistive note generator when paired with human editing.
This sample day shows how a 1‑hour focused test gives a decision we can act on.
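The end‑of‑day decision can also be written against the thresholds set in the goal statement; a minimal sketch using the observed numbers above (how we fold the two criteria together is our illustrative reading, not a fixed rule):

```python
# Thresholds from the goal statement, defined before testing.
REQUIRED_ACCURACY_SHARE = 0.85   # share of snippets that must hit the 85% accuracy bar (our reading)
MAX_LIVE_LATENCY_S = 2.0

# Observed results from the sample day.
accuracy_share = 7 / 9           # 77.8% of snippets met the accuracy criterion
p90_latency_s = 2.4

live_ready = accuracy_share >= REQUIRED_ACCURACY_SHARE and p90_latency_s < MAX_LIVE_LATENCY_S
print("Live transcription:", "suitable" if live_ready
      else "unsuitable; keep as an assistive note generator with human edits")
```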
Section 12 — Mini‑App Nudge
We recommend a tiny Brali module: a "Vendor Claims Validator" micro‑task that opens with the three question prompts and an upload field for 9 trial logs. Use the Brali LifeOS check‑in pattern: "Define 3 functions → Run 3×3×3 trials → Log outcomes." This reduces friction and centralizes evidence.
Section 13 — Risks, limits, and ethical considerations
We address limits and risks candidly.
Risk
Incomplete testing leads to false negatives. A rushed 3×3×3 may falsely reject a tool that actually performs well under certain conditions. To mitigate: if a test fails but the vendor has a known configuration that might change results, do a brief configuration match (≤30 minutes) before abandoning.
Risk
Bias in test stimuli. If our test samples are not representative (e.g., only male voices), we may misjudge performance for underrepresented users. Remedy: include diverse voices and conditions where possible. For a 9‑trial setup, ensure at least 3 are from different demographic groups relevant to users.
Risk
Security and privacy oversights. Tools that transmit data may expose sensitive content. Testing should include at least one network observation (if able) and a review of the privacy policy. If network analysis isn’t feasible, set an interim rule: do not use tools that send raw audio/video off‑device for regular operations without explicit consent.
Ethical note: When testing devices in public or with people, obtain consent. If a test involves recording people, inform them and anonymize stored logs.
Section 14 — Common vendor objections and how we respond
Vendors often present these objections; we rehearse responses.
Vendor: “Our demo shows the full capability.” We: “We’d like to run three independent tests that mirror our environment. Could you provide the exact demo settings or allow unguided testing?”
Vendor: “Our model improves with time.” We: “We will baseline now and re‑test after 14 days. Could you provide a changelog for model updates during that period?”
Vendor: “You need to buy to test.” We: “We can run a short paid pilot (5 units) for 14 days with predefined acceptance criteria. If it passes ≥80% of criteria, we proceed.”
These scripted responses reduce ad hoc negotiation time and clarify expectations.
Section 15 — Tools and checklist we carry
We keep a small kit for rapid testing. We quantify what's in it so we can act today.
Rapid test kit (carry in a small bag)
- Digital thermometer (°C) — cost ~ $20, accuracy ±0.5 °C for temperature checks.
- Pocket sound level meter (dB) — cost ~ $30.
- USB power meter (A/V) — cost ~ $15, for measuring draw.
- Small tripod and measuring tape (2 m).
- Smartphone with stopwatch and network monitor app.
- Notebook or Brali LifeOS entry template.
Total cost: roughly $80–$120. For many teams, this is cheaper than a single misprocured device.
Section 16 — One explicit pivot: our test reframe
We tell the story of a pivot that changed our practice.
We assumed that vendor demos were reliable signals of real‑world performance → observed repeated mismatches in 6 pilot projects where demos passed but field tests failed 40–60% of the time → changed to mandatory independent tests (3×3×3) and a 14‑day deployment for pilot assemblies. That pivot reduced post‑deployment issues by about 55% in our experience (we tracked remediation hours before and after).
Section 17 — Check‑in Block (for Brali LifeOS and paper)
We provide exact check‑ins to log daily and weekly progress and the numeric metrics to track.
Daily (3 Qs — sensation/behavior focused)
- Did we pause before assuming capability? (Yes / No)
- Did we run at least one test related to a visual claim? (e.g., shiny design implying speed) (Yes / No)
- How did we feel after the test? (Relief / Frustration / Curious / Neutral)
Weekly (3 Qs — progress/consistency focused)
- How many tools did we test this week? (count)
- How many tests met our predefined acceptance criteria? (count)
- What decision did we make at the end of each test? (Keep / Modify / Reject; provide short notes)
Metrics to log (numeric)
- Trials run per week (count).
- Mean latency or accuracy (minutes or percent) for the primary function tested.
We suggest using Brali LifeOS to store these check‑ins: create a task called “Vendor Claims Validator — Daily Check” and another for “Weekly Summary.” Log numeric fields for trials and mean metrics.
Section 18 — Alternative path for busy days (≤5 minutes)
If pressed for time, use this 2‑step sprint.
- Visual‑claim checkpoint (2 minutes): Ask the question aloud or in Brali: “Am I assuming capability because of appearance?” If the answer is “yes,” mark the item for a full test; if “no,” proceed but note it in Brali.
- Single 3‑trial smoke test (≤3 minutes): Choose one critical function and run 3 quick trials. If all 3 pass, schedule a 9‑trial test within 7 days. If ≥1 fails, stop and schedule a comprehensive test.
This saves immediate time and avoids a blind adoption while keeping momentum.
Section 19 — Final examples from daily life
We close with three brief real‑world stories that show outcomes.
Example 1: The "smart lamp" that couldn't dim
We bought a lamp with a sleek touch sensor claiming “smooth dimming”: touch the surface and the lamp should change brightness continuously. Our 3×3×3 probe revealed stepped changes: only 2/9 trials produced smooth dimming. Because we logged these trials, the vendor replaced the lamp without fuss, citing reproducible hardware failure.
Example 2: An AI meeting assistant that misses names
We tested an assistant for name recognition. In 27 short name mentions, accuracy was 60%. We kept it as a meeting note aid, not the official record.
Example 3: A security camera that 'sees' through glass
A vendor claimed “person detection through glass.” We tested and saw 0/9 successful detections with reflections present. We avoided deploying outdoors behind glass.
Each example had a small decision point rooted in a short test and a recorded log.
Section 20 — How to teach this habit to others
We find that people adopt habits when they can practice and then see immediate benefits. Run a 30‑minute workshop:
- 5 minutes: Quick theory and the Why.
- 10 minutes: Each participant picks a device and defines 3 functions + criteria.
- 15 minutes: Run 3×3×3 tests and share results.
Repeat weekly for 4 weeks. Track adoption by number of tested tools and reduction in remediation hours.
Closing thoughts
We do not aim to kill enthusiasm for technology. Design and aesthetics are valuable. They often indicate attention to detail and user experience. But aesthetics are not the same as capability. We recommend becoming a habitually skeptical practitioner: when we admire a device, we also test it. We balance curiosity and rigor. We accept short tests as investments that pay back in reliability, fewer surprises, and clearer vendor accountability.
Mini‑decision to do now
Open Brali LifeOS and create a 10‑minute task: “Vendor Claims Validator — Define 3 functions for [Device name]” and schedule 15–30 minutes tomorrow to run the 3×3×3 quick probe.
Mini‑App Nudge (again, inside the narrative)
Use the Brali micro‑module "Vendor Claims Validator" to jump from assumption to evidence: define functions, run trials, and save the results to your journal. It takes 5 clicks and sets calendar reminders.
Check‑in Block (repeat here for convenience)
Daily (3 Qs)
- Did we pause before assuming capability? (Yes / No)
- Did we run at least one test related to a visual claim today? (Yes / No)
- How did the test make us feel? (Relief / Frustration / Curious / Neutral)
Weekly (3 Qs)
- How many tools did we test this week? (count)
- How many met acceptance criteria? (count)
- What final decision did we make for each tested tool? (Keep / Modify / Reject; short note)
Metrics
- Trials run this week (count)
- Primary metric (e.g., mean latency in seconds or accuracy in %)
Alternative path for busy days (≤5 minutes)
- Ask: “Am I assuming this because it looks good?” (30 s)
- Run a 3‑trial smoke test of the most critical function (≤3 minutes)
- Log outcome and schedule full test if uncertainty remains (≤1.5 minutes)
--- End of Hack №1035 ---
