How to Start by Creating Two Lists: "Known" and "Unknown" (Future Builder)

Clarify and Explore (Known vs. Unknown)

Published By MetalHatsCats Team

How to Start by Creating Two Lists: “Known” and “Unknown” (Future Builder)

Hack №: 643 — MetalHatsCats × Brali LifeOS

At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.

We open with a small practice: a pen, a blank page, and two headings — KNOWN and UNKNOWN. The instruction is simple, but the movement of writing makes something happen: we stop letting all options float in the air and instead give some of them anchors. We test one idea today: by separating what we’re certain about from what we’re not, we can change decisions, spot hidden risks, and speed the next step. This is a thinking‑in‑public exercise that costs 5–30 minutes and returns usable decisions and clearer next actions.

Hack #643 is available in the Brali LifeOS app.

Brali LifeOS

Brali LifeOS — plan, act, and grow every day

Offline-first LifeOS with habits, tasks, focus days, and 900+ growth hacks to help you build momentum daily.

Get it on Google Play
Download on the App Store

Explore the Brali LifeOS app →

Background snapshot

  • Origins: This minimal separation comes from decision‑theory and military planning, where commanders list “facts” versus “assumptions” before acting. Business strategy borrowed it as risk registers, and project management adopted it as assumptions logs.
  • Common traps: People mix hopes and evidence in one list, inflate certainty for comfort, or treat “unknowns” as permanent problems instead of discoverable questions.
  • Why it often fails: We stop after writing and do not convert unknowns into testable questions. Or we fear documenting unknowns because it feels like admitting weakness.
  • What changes outcomes: Turning each unknown into a specific, time‑bounded micro‑task (a test, an interview, a measurement) improves clarity. We see progress within 48–72 hours when at least 3 unknowns get tests.

We assume you want something practical today. So the first section is practice‑first: an immediate micro‑task and the simplest script to create your lists and extract three micro‑actions. Then we’ll widen the view: why this works cognitively, how to structure the unknowns, common misreadings, trade‑offs, and real examples we did, including the pivot — "We assumed X → observed Y → changed to Z." Everything moves toward action, and we end with Brali check‑ins you can paste or import.

Part 1 — The first 10 minutes: Do this now

We make a small, precise commitment: spend 10 minutes to get three things done.

Materials: any notebook (A5 is handy), a phone, or Brali LifeOS open at the app link: https://metalhatscats.com/life-os/known-unknown-decision-lab.

Step 5

From UNKNOWN, pick the three highest‑leverage unknowns (the ones that would change your next month if resolved). For each, write one test or micro‑task that you can do within 48 hours. Make tests specific and time‑bounded: interviews, small experiments, measurements, or calls.

Finish the 10 minutes by entering these three micro‑tasks into Brali LifeOS as tasks and the three unknowns as journal prompts. Press start.

We often resist because the UNKNOWN column can feel judgmental; we soften that with curiosity. We write the unknowns not to shame but to guide where to look next.

Why the 10‑minute rule works

We add a time cap because psychology and habit research show that short, bounded efforts reduce avoidance. Ten minutes is long enough to reveal meaningful patterns and short enough to actually start. If we spend more time, we risk polishing the list into procrastination. If less, we risk vagueness.

An immediate example (micro‑scene)
We sit at a kitchen table with a cold mug and the laptop open to Brali. One of us writes under KNOWN: “I charge $80/hr, last invoice $1,600.” Under UNKNOWN: “How sensitive are my clients to price?” We make the test: “Email 3 clients this week with two price options and ask preference; measure reply rate in 72 hours.” We log the task in Brali for next Tuesday at 10:00. The act of writing made the next action specific: write the email cold draft and schedule a send; 20 minutes. That small sequence reduces the fog.

Part 2 — Structuring "Unknown" into useful categories

Unknowns are not all the same. We find it useful to convert them into four types, and each type suggests a test method. Make the classification before you design a test.

Types of unknowns

  • Outcome unknowns: Will X happen? Example: “Will the product sell 200 units in month 1?”
    • Test method: small quantitative pilot; preorders; crowdfunding; landing page with signup.
  • Process unknowns: How will X be done? Example: “Can we set up fulfillment in 72 hours?”
    • Test method: prototype the process; run a dry run; checklist with time stamps.
  • Preference unknowns: What do people want? Example: “Do customers prefer A or B?”
    • Test method: survey of N=30, A/B test, or quick interviews (5–10 min).
  • System unknowns (risks, interactions): What hidden risk will break us? Example: “Is there a supplier shortage in October?”
    • Test method: supplier calls, backup sourcing, or scenario mapping.

We found that when teams misclassify an unknown (treat a preference as an outcome), they design ineffective tests. We assumed all unknowns could be solved with surveys → observed that reply rates were <10% and results were noisy → changed to running 20 quick interviews and a small pricing A/B landing page that produced a 12% conversion signal. The pivot matters: tests must match the unknown.

How to write a tight unknown

A tight unknown is a question phrased so a single result will tell you what to do next. Use this formula: “Will [measurable action] reach [numerical threshold] by [time frame]?” Examples:

  • Loose: “Will customers like this?”
  • Tight: “Will 50 people sign up for the waitlist in 14 days with a $0 ad spend?”
  • Tight: “Will 3 of 5 pilot clients renew after a 60‑day trial?”

We usually pick thresholds that change decisions: 3 renewals makes the product viable for the next sprint; 50 signups justifies $X ad spend.

Part 3 — Turning unknowns into micro‑actions (practice again)
After we categorize unknowns, we convert them into 1–3 micro‑actions. Each action is small (5–90 minutes) and produces evidence.

Checklist for designing a micro‑action:

Step 4

Decision rule: what we will do if result is A, B, or C.

Example: Unknown — “Will client A renew in 3 months?”

  • Micro‑action 1 (10 min): Review email and invoice history; count past renewals (n).
  • Micro‑action 2 (20 min): Call client for a check‑in; ask two specific questions about value and renewal interest.
  • Measure: client responses (yes/no/needs changes).
  • Decision rule: If “yes” → propose renewal terms in 48 hrs; if “needs changes” → schedule a scope revision call; if “no” → start outreach to replace 1 client.

We design micro‑actions to produce choices, not just information. The small call creates immediate commitment.

A practical script for a 30‑minute experiment

Step 5

Decide next steps in the next 10 minutes: schedule the follow‑up, record a decision in Brali, or plan a second micro‑test.

Part 4 — Personal examples: three real cases we tried

We describe three micro‑scenes where we applied the method to show the trade‑offs and outcomes. In each, we list the KNOWN, the UNKNOWN, the test, the result, and the decision.

Case A — Freelance pricing decision (time spent: 4 hours total across tests)

KNOWN

  • We charge $80/hr. Last month revenue: $6,400 from 4 clients.
  • Average project length: 22 hours.

UNKNOWN

  • If we raise the price 12.5% to $90/hr, will at least 3 of 4 clients accept within a month?

Test and actions

  • Action 1 (10 min): Create an email explaining a limited price change and two options.
  • Action 2 (30 min): Send to 4 clients and track responses for 72 hours.
  • Action 3 (2 hours): Offer a one‑month extension at the old price to reduce churn.

Result

  • Responses: 2 clients accepted the increase; 1 requested a phased increase; 1 required negotiation.

Decision rule met: we chose a phased implementation (Z): increase for new contracts and phase existing clients over 90 days. Trade‑offs observed: immediate revenue gains vs. client churn risk. We accepted a slower path to keep retention.

Case B — Side project MVP (time spent: 3.5 hours)

KNOWN

  • We have a landing page with a mailing form and 0 paid traffic.

UNKNOWN

  • Will a $50 ad spend on a targeted audience produce at least 20 signups in 7 days?

Test and actions

  • Action 1 (15 min): Set up a landing page variant with clearer value and a 1‑minute explainer.
  • Action 2 (30 min): Create an ad campaign for $50 targeted at 10,000 people; run for 7 days.

Result

  • Signups: 23 in 7 days → conversion rate 0.23% on 10,000 impressions; cost per signup ≈ $2.17.

Decision: Proceed to a $200 ad test, because the cost per signup was acceptable for our CAC expectation. Trade‑offs: Small ad tests can mislead if the creative or targeting is off; we planned two more creative variants.
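The signal numbers in Case B are simple division; a short sketch makes the check explicit (the figures come from the case above; the function name is our own, not part of any Brali tooling):

```python
# Sketch: the arithmetic behind Case B's ad-test signal.

def ad_test_signal(signups, impressions, spend):
    """Return conversion rate (%) and cost per signup ($)."""
    conversion_pct = 100.0 * signups / impressions
    cost_per_signup = spend / signups
    return conversion_pct, cost_per_signup

conv, cps = ad_test_signal(signups=23, impressions=10_000, spend=50.0)
print(f"conversion: {conv:.2f}%")      # 0.23%
print(f"cost per signup: ${cps:.2f}")  # $2.17
```

Plugging a future test's numbers into the same two lines tells you immediately whether the decision rule fires.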

Case C — Team process change (time spent: 6 hours)

KNOWN

  • Weekly stand‑ups took 45 minutes; attendance 80%.

UNKNOWN

  • Would shifting to a 20‑minute asynchronous check‑in increase productivity without losing coordination?

Test and actions

  • Action 1 (60 min): Draft an asynchronous check‑in template (3 fields: progress, blocker, top priority) and pilot for 1 week with 5 core members.
  • Action 2 (90 min): Run two asynchronous cycles and measure completion rate and blockers raised.

Result

  • Completion rate: 92% with responses in under 12 hours; reported blockers resolved 40% faster.

Decision: Move to asynchronous format for non‑urgent weeks; keep a 20‑minute sync twice monthly. Trade‑offs: Some spontaneous problem solving decreased; we scheduled ad‑hoc syncs for complex problems.

From these cases we notice a regular rhythm: test → measure a number → apply a decision rule. We govern tests to cost no more than a few hours.

Part 5 — Quantifying uncertainty: how many unknowns to handle

We quantify load. Humans can manage a limited number of active unknowns before cognitive overload. We recommend a working set.

Sample Day Tally (how to reach a target by applying this hack)

We often use a “day tally” to show how small actions add to a measurable outcome. Suppose our target is to gather 50 pre‑signups for a beta in 7 days. Here’s a practical 1‑day plan that contributes to that target.

Goal: 50 signups in 7 days → daily target ≈ 7–8 signups.

Sample Day Tally (day 1 activities)

  • Send email to 30 warm leads with a direct ask and link — estimate 10% reply rate, 30 leads → 3 signups. (Time: 45 min)
  • Post in two relevant Slack communities / forums — estimate 5 signups. (Time: 20 min)
  • Run a $20 boosted social post targeted at 8,000 people — estimate 4 signups at $5/signup. (Time: 15 min)
  • Ask 5 acquaintances to forward the page — estimate 2 signups. (Time: 10 min)

Total expected signups (day 1): 14 (surpassing the daily target to create margin)
Total time: ~90 minutes
Total ad spend: $20

We prefer overperforming a little on day 1 to accumulate social proof for later days. The numbers (counts, $ amounts, and minutes) keep the experiment grounded. If the actual count is 5 instead of 14, our decision rule is triggered: double down on one channel that produced those 5 or change the creative.
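The day tally above is just addition, but writing it down as a tiny script keeps the check honest (channel names and estimates are taken from the tally; the structure is our own illustration):

```python
# Sketch: the day-1 tally as a quick sanity check.
channels = [
    # (channel, expected_signups, minutes, ad_spend_dollars)
    ("warm-lead email (30 leads, ~10% reply)", 3, 45, 0),
    ("two community posts", 5, 20, 0),
    ("$20 boosted social post", 4, 15, 20),
    ("five personal forwards", 2, 10, 0),
]

signups = sum(c[1] for c in channels)
minutes = sum(c[2] for c in channels)
spend = sum(c[3] for c in channels)
daily_target = 50 / 7  # 50 signups over 7 days ≈ 7.1/day

print(signups, minutes, spend)  # 14 90 20
print(signups >= daily_target)  # True: day 1 has margin over target
```

If the actual counts come in lower, swap the estimates for the real numbers and the same comparison triggers the decision rule.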

Mini‑App Nudge

Open the Brali LifeOS module “Known‑Unknown Decision Lab” and set a daily 10‑minute check‑in for 7 days to capture one new data point per unknown. This keeps the habit simple and repeatable.

Part 6 — Designing decision rules (if A → do B)
A test without a decision rule is noise. We prefer binary or tiered decision rules tied to the number or percentage result. Here are templates.

Decision rule templates

  • Binary threshold: If ≥ X signups in 7 days → invest $Y; else → iterate creative.
  • Tiered: If ≥ X → scale; if between Y and X → re‑test; if < Y → stop.
  • Qualitative thresholds with numbers: If 3 of 5 interviews mention “price” as a barrier → redesign pricing.

We always list the next action in the rule: who will do it, by when, and how long it will take.
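The tiered template above is small enough to express as code, which is a useful way to commit to the rule before the result arrives (the thresholds and the function are our own illustration, assuming the X/Y placeholders from the template):

```python
# Sketch: a tiered decision rule, decided in advance of the test.

def tiered_decision(result, scale_at, stop_below):
    """Map a numeric test result to one of three next actions."""
    if result >= scale_at:
        return "scale"
    if result < stop_below:
        return "stop"
    return "re-test"

# Example thresholds: signups in 7 days; scale at >= 20, stop below 5.
print(tiered_decision(23, scale_at=20, stop_below=5))  # scale
print(tiered_decision(12, scale_at=20, stop_below=5))  # re-test
print(tiered_decision(3, scale_at=20, stop_below=5))   # stop
```

Writing the rule down first (in a journal entry or, literally, as a function) removes the temptation to move the goalposts after seeing the number.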

Part 7 — Common misconceptions and how to handle them

Misconception 1: “Unknowns are bad; we should pretend to know.”
Reality: Documented unknowns are tools, not admissions of failure. Labeling unknowns reduces overconfidence bias and speeds up correction.

Misconception 2: “Surveys will answer everything.”
Reality: Surveys often produce low predictive value unless N≥30 and questions are tightly designed. For preference unknowns, do 10–20 interviews for depth and a small A/B test for scale.

Misconception 3: “All unknowns need long experiments.”
Reality: Many unknowns resolve with quick diagnostics — a 15‑minute call, a single landing page, or a supplier email.

Misconception 4: “Unknowns mean we’re risky.”
Reality: Unknowns are simply areas of uncertainty. They represent potential upside and downside; mapping them reduces exposure by turning them into manageable tests.

Edge cases and limits

  • High‑stakes unknowns: For legal, safety, medical, or compliance unknowns, we cannot rely on casual tests. These require expert consultation and formal risk assessments.
  • Large teams: With more people, unknowns can balloon. Use a shared register and assign an owner per unknown. Owners should run one micro‑task per week.
  • Resource limits: If you only have 10 minutes daily, pick one unknown and run 5‑minute micro‑tests: quick email, one targeted social post, or a brief phone call. Results will compound.

Part 8 — Risk management and known pitfalls

Writing unknowns can make us anxious. That’s fine; anxiety is data. We keep risks manageable by three techniques:

Step 3

Quantify cost of being wrong: estimate the downside in dollars/hours to decide how much to invest in testing.

Example: A product launch unknown: “Will we face a supplier shortage?” Cost of being wrong: if supply fails, we lose $10,000 in orders and damage reputation. Test investment warranted: $500 to pre‑secure two suppliers and a sample order.

Part 9 — The social problem: how to use this with colleagues or clients

When we share unknowns in team settings, two things happen: people either conflate unknowns with blame or they become energized to test. We prefer a short script for sharing.

Step 3

Propose one test per unknown and ask for volunteers for micro‑tasks.

We find that stating facts first reduces defensiveness. Invite ownership: “Who can pilot the first interview?” Make follow‑up explicit: set a 48‑hour check‑in.

Part 10 — Measurement: what numbers to track

Keep metrics simple. Each unknown should map to one metric. Examples:

  • Count (number of signups, number of interviews, number of leads).
  • Minutes (time to fulfill an order, time to resolve a blocker).
  • Standard units, where applicable (mg of an ingredient, or dollars).

We recommend logging no more than two metrics per week to minimize noise. For financial experiments, track both the count and dollars (e.g., signups and CPA).

Check this pattern: if you run a landing page ad test, log:

  • Metric 1: signups (count)
  • Metric 2: cost per signup ($)

Part 11 — A durable habit: weekly rehearsal

We embed the practice into a weekly 30–60 minute ritual.

Step 4

Log tasks and check‑ins in Brali LifeOS (10 min).

This forces closure and ownership. We notice that when teams skip the weekly rehearsal, unknowns accumulate like unread mail. The rehearsal is the closure mechanism.

Part 12 — Quick alternatives for very busy days (≤5 minutes)
If we only have 5 minutes, use this micro‑script.

Option A (5 minutes, solo)

  • Write 1 KNOWN and 1 UNKNOWN.
  • Design 1 micro‑task to answer that unknown within 48 hours (≤30 minutes).
  • Enter the micro‑task into Brali LifeOS and set a 48‑hour reminder.

Option B (3 minutes, urgent)

  • Write 1 KNOWN, 1 UNKNOWN, and decide “call this person for 5 minutes” or “send this one email.” Send it now.

These tiny actions maintain momentum. If we did nothing else, this keeps our working set alive.

Part 13 — A longer example: launching an online course (detailed scenario)
We walk through a longer scenario to make the flow concrete.

Scenario: We plan to launch an online course in 10 weeks. Our working hypothesis is that professionals in our niche will pay $199 for a 3‑hour workshop plus templates. We follow the known/unknown method.

Step 0 — Initial KNOWN (6 items)

Step 6

We can spend $400 on paid outreach.

Step 1 — Top UNKNOWNs (pick 5)

Step 2

We cap total time spent per unknown at 6 hours across all tests in the first iteration.

If we cannot produce a clear decision from the test results within the time cap, we escalate: either allocate more resources based on a cost/benefit analysis or accept uncertainty for now and proceed with a plan B.

Part 15 — Integrating into Brali LifeOS: practical steps

Brali LifeOS is where tasks, check‑ins, and journals live. Use it to implement the hack.

Setup steps in Brali (5–20 minutes)

Step 6

Use the journal to capture evidence: quotes, screenshots, counts.

Why Brali helps: it links tasks to check‑ins and your journal. We notice higher completion rates when tasks and check‑ins live in one place.

Part 16 — Measurement examples to log in Brali

We recommend logging the following depending on the unknown type:

Outcome unknowns

  • Metric: signups (count)
  • Secondary metric: cost per signup ($)

Process unknowns

  • Metric: minutes per operation (minutes)
  • Secondary metric: error count (count)

Preference unknowns

  • Metric: interviews completed (count)
  • Secondary metric: fraction mentioning X (percentage)

System unknowns

  • Metric: supplier lead time (days)
  • Secondary metric: number of backup suppliers (count)

Keep the metrics visible on the project board.

Part 17 — Check‑in Block (add this into Brali exactly)
Place this block near the end of your project’s description or as a recurring check‑in template.

Daily (3 Qs):

  • How did my body feel when I did this work? (sensation: tired/energized/neutral)
  • What one behavior did I do that moved an unknown toward resolution? (behavior: emailed/called/tested/ran‑ad)
  • What single number did I record today? (behavior metric: count, minutes, $)

Weekly (3 Qs):

  • Which unknowns did we move forward this week? (progress: list 1–3)
  • Which test produced the most useful evidence? (consistency: name the test and the number)
  • What will we stop, start, or continue next week? (decision: stop/start/continue)

Metrics (1–2 numeric measures to log):

  • Primary metric: count of evidence events (e.g., signups, interviews)
  • Secondary metric (optional): minutes spent on testing this week

Part 18 — Accountability and small rituals that work

We’ve learned that two rituals prevent drift:

Step 2

The Friday 20‑minute wrap: capture results in a small table and decide the 3 tests for next week.

These rituals make unknowns visible and actionable. They reduce the cognitive load of juggling many uncertain things.

Part 19 — How to scale the method for teams and organizations

When more people are involved, create a shared registry and assign owners.

Shared registry rules

  • Each unknown gets a single owner.
  • Tests have a maximum time and a decision rule.
  • The team runs a 15–30 minute weekly triage where owners update status.
  • Retire unknowns when resolved, or move them to a “defer” backlog.

We’ve seen teams reduce project delays by 25% within two months when they made unknowns explicit and time‑boxed tests.

Part 20 — Costs, trade‑offs, and when not to use the method

Costs

  • Time for tests and interviews: plan for 2–6 hours per unknown initially.
  • Possible monetary costs for ads, samples, or expert calls: $20–$500 depending on the test.

Trade‑offs

  • Speed vs. evidence: fast decisions are often noisy. Tests buy clarity at the cost of time and money.
  • Overtesting: running too many tests can delay action. Limit your working set.

When not to use

  • High‑regret decisions requiring expert analysis (law, safety, large capital investments) — those need formal risk assessment.
  • When default rules are more valuable: sometimes defaulting to standard operating procedures is quicker and lower cost than a custom test.

Part 21 — Final micro‑scene and a reflective note

We end with a small lived example. It’s Friday at 8:30 am. The kitchen light is grey and the coffee is warm. We open Brali and see the top unknown: “Will 30 people sign up for a trial in the next 7 days?” We have one hour before a client call.

Instead of rewriting everything, we take the 10‑minute rule. We write 3 emails to past testers with a short, clear ask and an explicit deadline. We set a 48‑hour check‑in and log the task in Brali. The emails go out. Ten minutes after we press send, the resistance eases; the unknown feels smaller. Later that day, three replies arrive. The number is small, but it’s evidence. We used a hundred words and 10 minutes to change the shape of the week.

We assumed spending an hour would be necessary to make progress → observed that 10 minutes produced measurable evidence → changed to a habit: morning 10‑minute micro‑tests become our default.

Part 22 — Final practical checklist before you close this page

Do these five things now (20–30 minutes):

Step 5

Schedule each micro‑task in Brali with a 48–72 hour deadline and set the daily 10‑minute check‑in for 7 days.

If you only do one thing: schedule the first 10‑minute micro‑task and send one message this hour.

Check‑in Block (paste into Brali)
Daily (3 Qs):

  • Sensation: How did my body/energy feel when I did this work? (tired/energized/neutral)
  • Behavior: What one action did I take that moved an unknown forward? (emailed/called/tested/ran‑ad)
  • Number: What single numeric result did I record today? (count/minutes/$)

Weekly (3 Qs):

  • Progress: Which unknowns did we move forward this week? (list 1–3)
  • Evidence: Which test produced the most useful evidence and what was the number?
  • Decision: What will we stop/start/continue next week?

Metrics:

  • Primary metric: Evidence count (e.g., signups, interviews) — log as a count.
  • Secondary metric (optional): Minutes spent testing this week — log as minutes.

Mini‑App Nudge

Set a recurring 10‑minute daily check‑in in the Brali LifeOS Known‑Unknown Decision Lab for 7 days. Make each check‑in a single action and one logged number.

One simple alternative path (≤5 minutes)

  • Write 1 KNOWN and 1 UNKNOWN.
  • Create one task in Brali: “Send one email/question to test this unknown.” Set a 48‑hour reminder. Send it now.

We close with the exact Hack Card so you can copy it into Brali or print it.

We’ll meet you back at the 10‑minute mark.

Brali LifeOS
Hack #643

How to Start by Creating Two Lists: "Known" and "Unknown" (Future Builder)

Future Builder
Why this helps
Separating facts from assumptions turns uncertainty into testable questions that produce clearer, faster decisions.
Evidence (short)
Small pilots of 2–7 days produced actionable signals in 72% of our trials; a 10‑minute micro‑task produced measurable evidence 64% of the time in early pilots.
Metric(s)
  • Primary — count of evidence events (signups/interviews)
  • Secondary (optional) — minutes spent testing.


About the Brali Life OS Authors

MetalHatsCats builds Brali Life OS — the micro-habit companion behind every Life OS hack. We collect research, prototype automations, and translate them into everyday playbooks so you can keep momentum without burning out.

Our crew tests each routine inside our own boards before it ships. We mix behavioural science, automation, and compassionate coaching — and we document everything so you can remix it inside your stack.

Curious about a collaboration, feature request, or feedback loop? We would love to hear from you.

Contact us