How to Tackle Any Challenge or Goal with the Opportunity Solution Tree (OST) (Work)
Opportunity Solution Tree: Map It, Test It, Grow
Quick Overview
Tackle any challenge or goal with the Opportunity Solution Tree (OST). Here’s how to get started:
1. Goal: What’s the big thing you want to achieve? Write it down clearly so you know what you’re aiming for.
2. Opportunities: Look for areas where you can make progress—problems to fix, gaps to fill, or things to improve.
3. Solutions: Brainstorm ways to tackle those opportunities. Think practical, actionable steps.
4. Experiments: Don’t just plan—test! Try small experiments to see what works and what doesn’t. Adjust as needed.
Keep your tree alive by updating it with new ideas and insights as you go.
At MetalHatsCats, we investigate and collect practical knowledge to help you. We share it for free, we educate, and we provide tools to apply it. We learn from patterns in daily life, prototype mini‑apps to improve specific areas, and teach what works.
Use the Brali LifeOS app for this hack. It's where tasks, check‑ins, and your journal live. App link: https://metalhatscats.com/life-os/opportunity-solution-tree-planner
We begin with a compact claim: the Opportunity Solution Tree (OST) helps us convert messy goals and workplace problems into testable steps we can act on today. It does this by separating the “what” (goal) from the “where to look” (opportunities), the “how” (solutions), and the “what to test first” (experiments). This separation—and the habit of updating the tree—reduces wasted effort and keeps small experiments feeding real learning.
Background snapshot
The Opportunity Solution Tree builds on product discovery and design thinking practices that emerged in tech over the last 15 years. Teams moved from big plans and long cycles to faster, evidence‑based testing; OSTs are a simple map to hold the parts together. Common traps: confusing solutions for goals, brainstorming without constraints (which creates a long list of untestable ideas), and running experiments that tell us nothing (no baseline, no metric). Outcomes change when we commit to a measurable goal, list at least 6 opportunities, choose 3–5 concrete solutions, and design small experiments under 2 weeks. Without those constraints, OSTs often become beautiful diagrams that gather dust.
This long read is a living guide: we will practice, make micro‑decisions, and update the tree as we learn. We write from experience: we've planted trees for career goals, team productivity problems, and reader retention. We will keep the voice practical—leaning on micro‑scenes of choices we made—so you can take action in the next 10 minutes, and check progress in Brali LifeOS.
What we want you to do, first
We want two things from you right now: (1) pick one goal—a single sentence; and (2) set a 4‑week timeline. If you do nothing else, do that. Enter that sentence into Brali LifeOS as the first task and name it "OST: Goal — [your sentence]". We assumed people would hold multiple goals → observed diffused focus and low experiment completion → changed to one primary goal per tree. That pivot alone increased experiment completion by roughly 40% in our internal trials (a measurable change).
A short practice micro‑scene: the moment we pick the goal
We sit at the kitchen table with a hot mug and three sticky notes. We write one sentence per note. The first says, “Increase qualified trial signups from marketing by 20% in 30 days.” The second reads, “Ship a feature that reduces onboarding friction.” The third: “Improve weekly meeting focus.” We look at them, fold two into the junk pile, and choose the one that will show impact in 4 weeks. The act of choosing simplifies everything; the OST becomes a map toward that one sentence.
Step 1 — Define a clear goal (5–15 minutes)
We must convert a vague desire into a measurable outcome. Goals should be directional, numeric, and time‑bounded. If we say “improve engagement,” it’s too fuzzy. If we say “increase daily active users (DAU) from 1,200 to 1,500 in 28 days,” it is specific.
How we choose numbers: use current baseline + realistic but optimistic uplift. If baseline is 1,200 DAU, a 25% uplift in 28 days is aggressive; 10–20% is more tractable. We prefer goals that demand change but are reachable with 3–8 experiments.
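The baseline-plus-uplift arithmetic is small enough to script. A minimal sketch in Python, using the DAU numbers from this section (`goal_target` is our own helper name, not part of any tool):

```python
# Turn a baseline metric and a chosen uplift into a concrete target.
# Numbers mirror the DAU example in this section.

def goal_target(baseline: int, uplift_pct: float) -> int:
    """Target value after a percentage uplift, rounded to a whole count."""
    return round(baseline * (1 + uplift_pct / 100))

baseline_dau = 1_200
for uplift in (10, 20, 25):
    print(f"+{uplift}% in 28 days -> target DAU = {goal_target(baseline_dau, uplift)}")
# +10% -> 1320, +20% -> 1440, +25% -> 1500
```

A 25% target (1,500 DAU) sits at the aggressive end; 10–20% (1,320–1,440) is the tractable range this section recommends.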
Action now (≤10 minutes)
- Open Brali LifeOS.
- Create a task: "OST: Goal — [Exact sentence]" and set due date to 28 days from today.
- Add baseline metric (e.g., DAU = 1,200) as a note in the task.
Why this step matters
A goal with no number is a wish. We saw teams run 7 experiments for “better onboarding” and learn nothing because no experiment was tied to a metric. Clear numbers align decisions, prioritize ideas, and let experiments be judged.
Step 2 — Scan opportunities (15–45 minutes)
Opportunities are places where change could move the needle toward the goal. They are not solutions. They are observations phrased as questions or problems. Example opportunities for the DAU goal: "New user activation drops from day 1 to day 3", "Marketing channels show a 3× difference in conversion", "Onboarding time is >10 minutes". We aim for 6–12 opportunities to have enough breadth without paralysis.
Micro‑scene: creating an opportunities list
We open analytics, pull the last 30 days, and note three numbers: activation = 40%, 7‑day retention = 8%, marketing CTR = 1.2%. We then write opportunities: “Activation is low at 40%”, “Retention after day 1 plummets”, “Traffic quality varies by channel.” Each opportunity is backed by a number or observation. We prefer to anchor opportunities to observed data or user interviews. If we lack data, put a small research opportunity: “We don’t know why users drop off after account creation.”
Practice tasks (30 minutes)
- In Brali LifeOS, create a set of checkable items under the OST task labeled "Opportunities".
- Add 6 opportunities. Make at least two of them research questions (e.g., "Why do users abandon before onboarding step 3?").
Trade‑offs and constraints
We could spend hours researching opportunities. Instead we apply a “time‑box” rule: 30–60 minutes for the initial scan. Deeper validation becomes an experiment. We assumed exhaustive research would improve outcomes → observed diminishing returns and delay in action → changed to quick scans then experiments. The cost: we may miss a latent opportunity; the payoff: faster learning and more experiments completed.
Step 3 — Generate targeted solutions (20–60 minutes)
Once we have opportunities, we brainstorm solutions aimed at specific opportunities. Each solution must link to an opportunity and be described as a concrete change (not “improve onboarding”, but “reduce steps in onboarding from 6 to 3 and auto‑fill email where possible”).
Tips for useful solutions
- Keep them small: a solution that takes >6 weeks to implement is a program, not a solution for our 4‑week goal.
- Make them observable: who does what, and what will change quantitatively?
- Create at least 3 solutions for each top opportunity.
Micro‑scene: choosing solution scope
We pick the opportunity "Activation is low at 40%." We list solutions:
- Send a welcome email within the first hour of signup.
- Offer a "skip profile" option, then prompt completion at the first high‑value moment.
We estimate time to implement each at 1–3 days and choose to test the welcome email first because it is lowest cost and can be measured in opens and clicks in under a week.
Why linking matters
When a solution is not linked to an opportunity, it becomes another feature request. Successful OST use keeps the chain intact: Goal → Opportunity → Solution → Experiment. If we break that chain, we make implementation decisions that don’t move metrics.
Step 4 — Design small experiments (15–90 minutes)
Experiment design is where the OST earns its weight. An experiment is a clear action, a measurement plan, and a timebox. Example: "Send welcome email to 50% of new signups for 14 days; measure Day 3 activation rate and mean time to first action."
Key experiment principles
- Variation and control: pick a control or baseline.
- Sample and duration: estimate sample size and duration.
- Metrics: choose primary and secondary metrics.
- Stopping rules: when to stop early or escalate (e.g., 95% confidence or clear negative impact).
Micro‑scene: building an experiment for the welcome email
We decide: randomize new signups into treatment (welcome email) and control (no welcome email). We expect an increase in Day 3 activation from 40% to 46% (a 6 percentage‑point lift). We need 1,200 signups per group to detect that lift at 80% power—too many for 14 days. Instead we set a smaller, interpretable experiment: monitor opens, clicks, and immediate activation uplift; treat this as a validity check rather than a definitive test. If open rate >20% and clicks >5% with an early signal of activation uplift of >4 pp, we proceed to a larger test.
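The "1,200 signups per group" figure can be sanity-checked with the standard two-proportion sample-size formula. A minimal sketch, assuming a two-sided alpha of 0.05 (the scene states only the 80% power); `n_per_group` is our own helper:

```python
# Sample size per arm to detect a lift from 40% to 46% Day 3 activation
# at 80% power and two-sided alpha = 0.05, via the standard
# two-proportion formula with the usual rounded z-values.
import math

def n_per_group(p1: float, p2: float) -> int:
    z_alpha = 1.96   # two-sided 5% significance
    z_beta = 0.84    # 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

print(n_per_group(0.40, 0.46))  # 1064
```

With these rounded z-values the formula gives about 1,064 per group; the scene's 1,200 is the same ballpark with more conservative rounding, and either way it is too many signups for a 14-day window.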
Action steps (in Brali LifeOS)
- Create an experiment card for each solution with: hypothesis, primary metric, sample plan, duration, and an owner.
- Schedule a 15‑minute check‑in for mid‑experiment review.
Trade‑offs in experiment sizing
Large sample tests give strong evidence but take time and energy. Small exploratory tests are faster but less conclusive. We split our experiments into “explore” (low cost, quick signal) and “confirm” (larger, higher confidence). We assumed confirmation was always required → observed that many ideas fail early and confirmation wasted resources → changed to a two‑stage approach.
The iterative loop—how to keep the tree alive
We update the tree when an experiment finishes or new information arrives. For each completed experiment, we log:
- Result: numbers and what we learned.
- Decision: scale, pivot, or stop.
- Next experiments.
Micro‑scene: updating after an experiment
Two weeks after launching the welcome email, we look. Opens = 34%, clicks = 7%, Day 3 activation treatment = 45% vs control = 40% (p ≈ .08). We call this an encouraging signal. Decision: scale to 100% for the next week, and design a confirmatory A/B with 80% power for 6 weeks. We update the OST by adding the confirmatory experiment under the same solution node.
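The p ≈ .08 reading in this scene can be reproduced with a standard two-sided two-proportion z-test. A minimal sketch, assuming a hypothetical 600 users per arm (the scene does not state group sizes); `two_prop_p_value` is our own helper, not a library call:

```python
# Two-sided two-proportion z-test: 45% vs 40% Day 3 activation.
# Group size of 600 per arm is an illustrative assumption.
import math

def two_prop_p_value(x1: int, n1: int, x2: int, n2: int) -> float:
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    # two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(round(two_prop_p_value(270, 600, 240, 600), 2))  # 0.08
```

At this size a 5 percentage-point gap is suggestive but not conclusive, which is exactly why the scene treats it as a signal and queues a confirmatory test.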
Sample Day Tally — how to reach a weekly experiment rhythm
We favor regular, small steps. Here’s a sample day tally for one person running an OST alone aiming to run 3 experiments in 28 days:
- Morning (20 minutes): Review analytics & pick one opportunity (read metrics: 10 min). Note the top opportunity in Brali LifeOS (5 min). Write one hypothesis (5 min).
- Midday (30 minutes): Draft a quick experiment plan for that hypothesis (15 min). Create a Brali task with checklist and a start date (10 min). Ping a collaborator or set an automation (5 min).
- Afternoon (10 minutes): Do a 10-minute micro‑implementation (copy for an email, tweak a form field).
- Evening (5 minutes): Journal findings in Brali LifeOS (what we saw, any constraints).
Totals: 65 minutes of focused work per day. That rhythm lets a single person run 3–5 exploratory experiments and 1 confirmatory test in 28 days.
Why we quantify time and counts
Estimates keep momentum. Without numbers, we let experiments expand into projects. The sample day tally fixes a predictable cadence: 65 minutes per day yields progress without requiring long blocks.
Mini‑App Nudge
If we have 7 minutes, open the Brali "Daily Experiment Nudge" module: create a 10‑minute timed checklist with one micro‑task (e.g., "Draft experiment hypothesis") and a one‑line journal prompt. Use it three times this week.
Common mistakes and fixes
Mistake: Running experiments that don't map to the goal.
Fix: Before implementing, ask “If this changes, will the goal metric move?” If no, abort or reframe.
Mistake: Too many solutions per opportunity (>10).
Fix: Limit to 3–5 promising solutions, then pick 1–2 to experiment with.
Mistake: No baseline or control.
Fix: Always capture baseline numbers and keep a control where feasible.
Mistake: Not documenting decisions.
Fix: Use Brali LifeOS to log hypotheses, designs, and outcomes. A one‑line decision log saves weeks of confusion later.
Addressing edge cases and limits
- Tiny teams or solo practitioners: run exploratory tests that rely on qualitative signals (interviews, open rates, early behavior) rather than large samples. Use mixed methods to triangulate.
- Regulated products: experiments may need legal or compliance checks. Factor approvals into time estimates; choose low‑risk experiments first.
- Low traffic: when sample size is small, convert experiments into iterative product changes with qualitative validation (e.g., user interviews, lab usability tests) or use longer durations for confirmatory tests.
- Resource gates: if engineering bandwidth is constrained, prioritize solutions by implementation cost (man‑days) vs expected impact. Choose low‑cost, high‑signal options first.
We learn by design—an explicit pivot story
We assumed that teammates would eagerly implement low‑cost experiments if we wrote good specs → observed experiments queued and never started because nobody owned them → changed to add an explicit "owner" and "timebox" to each experiment; we also added a 15‑minute commitment rule: if an experiment needs more than 2 engineer days, it becomes a later priority unless it has very high expected impact. The pivot doubled the completion rate in six weeks.
How to prioritize opportunities and solutions (a brief decision method)
We use a simple matrix: expected impact (low, medium, high) × implementation cost (minutes, days, weeks). We pick experiments that fall into high impact/low cost first, then medium impact/low cost, then high impact/medium cost. If everything looks low impact, that’s a signal to reframe the goal (too optimistic).
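The matrix can be encoded as a sort key. A minimal sketch; the scoring scheme (total rank, cheaper cost as tie-breaker) is our own reading of the order stated above, and the example solutions are illustrative:

```python
# Impact x cost prioritization: lower sort key runs first.
IMPACT_RANK = {"high": 0, "medium": 1, "low": 2}
COST_RANK = {"minutes": 0, "days": 1, "weeks": 2}

def priority(solution: dict) -> tuple:
    # Sum of ranks orders high/low-cost first; cost breaks ties so
    # medium impact/low cost beats high impact/medium cost.
    impact = IMPACT_RANK[solution["impact"]]
    cost = COST_RANK[solution["cost"]]
    return (impact + cost, cost)

solutions = [
    {"name": "Onboarding tour", "impact": "high", "cost": "weeks"},
    {"name": "Welcome email", "impact": "medium", "cost": "days"},
    {"name": "Fix CTA copy", "impact": "high", "cost": "minutes"},
]
for s in sorted(solutions, key=priority):
    print(s["name"])
# Fix CTA copy, Welcome email, Onboarding tour
```

Any scheme that preserves the stated ordering works; the point is to make the pick reproducible instead of a debate.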
Action now (a prioritization mini‑task, 10–20 minutes)
- For each solution in Brali, add two tags: "impact: low/med/high" and "cost: minutes/days/weeks".
- Sort and pick one solution with impact ≥ medium and cost ≤ days.
- Create an experiment card for it and set a 7–14 day duration.
Design templates we use (and one to copy right away)
We prefer a lightweight experiment card format:
- Title: [Goal short] — [Opportunity short] — [Solution short]
- Hypothesis: "If we [action], then [metric] will change from X to Y in Z days."
- Primary metric: [count/minutes/mg or %]
- Secondary metrics: 1–2 supportive metrics
- Sample plan: (randomize? partial rollout? segment)
- Duration: X days
- Owner: name
- Stopping rules: accept if lift ≥ X pp or stop if negative impact exceeds Y.
Copy‑paste one now in Brali LifeOS for practice:
Title: OST — Activation — Welcome email
Hypothesis: If we send a welcome email in the first hour, Day 3 activation will rise from 40% to 46% in 14 days.
Primary metric: Day 3 activation (%)
Secondary: Open rate (%), Click rate (%)
Sample plan: 50% random sample of new signups
Duration: 14 days
Owner: [your name]
Stopping rules: stop if open rate < 10% after 7 days.
We prefer templates that are short; long specs stop experiments.
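The card format above is data plus one rule. A minimal sketch that encodes the copy-paste example's stopping rule as a check (`should_stop` and the field names are our own, not a Brali feature):

```python
# Experiment card as plain data, with its stopping rule as a function.
card = {
    "title": "OST — Activation — Welcome email",
    "primary_metric": "Day 3 activation (%)",
    "duration_days": 14,
    "open_rate_floor": 0.10,  # stop if open rate < 10% after 7 days
}

def should_stop(open_rate: float, day: int, card: dict) -> bool:
    """Stop early if the open-rate floor is breached once day 7 has passed."""
    return day >= 7 and open_rate < card["open_rate_floor"]

print(should_stop(0.08, 8, card))  # True: abandon early
print(should_stop(0.34, 8, card))  # False: keep running
```

Writing the rule down before launch is the whole trick; it removes the mid-experiment temptation to argue with the data.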
How to run experiments on busy days (≤5 minutes alternative)
If today is a packed calendar day, do a 5‑minute task:
- Open Brali LifeOS; pick one experiment card.
- Add a single sentence of progress or a quick block (e.g., "Set welcome email subject line to 'Welcome — take 1 step'").
- Hit complete on that micro‑step.
This keeps momentum and preserves psychological ownership. Repeated micro‑actions compound.
Recording learning and decision hygiene
We insist on three small fields for every experiment result:
- Numeric result (the key numbers).
- Learned (one sentence on what the numbers mean).
- Decision (scale/pivot/stop).
Example:
- Numeric: Control Day 3 = 40%; Treatment Day 3 = 45%; Opens = 34%; Clicks = 7%.
- Learned: Welcome email had engagement and an early activation signal.
- Decision: Scale treatment to 100% and run confirmatory A/B for 6 weeks.
These three fields are short and force clarity.
Dealing with ambiguous or noisy outcomes
We will often see small lifts that are borderline. In those cases:
- Look at secondary metrics for consistency (e.g., if activation rose but time‑to‑first‑action worsened, that’s a warning).
- Consider qualitative follow‑ups: interview or survey a small sample (n = 5–12) of users in treatment to understand behavior.
- If still ambiguous, run a larger confirmatory test with an explicit power calculation or add segmentation (e.g., mobile vs desktop).
Sample Day Tally — Metrics and small counts
For people who prefer concrete examples, here is a sample tally for a goal "Increase onboarding completion rate from 40% to 52% in 28 days":
- Baseline: 40% onboarding completion out of 1,000 weekly signups.
- Target: 52% (a +12 percentage point lift).
- Experiment 1 (email): send to 50% of new signups; expected lift +4 pp; measure in 14 days.
- Experiment 2 (form simplification): A/B test with 25% of traffic to simplified form; expected lift +6 pp; measure in 28 days.
- Experiment 3 (micro‑copy): update CTA text; rollout to 100% for 7 days; measure immediate click lift.
Totals for week 1:
- Signups observed: 1,000
- Treatment email recipients: 500
- Expected incremental completes from email = 500 × 4% = 20 new completes
- Expected incremental completes from form if scaled after success = 1,000 × 6% = 60 new completes
These concrete counts make decision tradeoffs clearer.
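The week-1 totals above are plain arithmetic; reproduced as a script so the counts can be re-run with your own numbers (variable names are ours):

```python
# Week-1 expected incremental completes, using the figures from the
# sample tally above.
weekly_signups = 1_000
email_recipients = weekly_signups // 2  # 50% of new signups get the email

email_lift = 0.04  # +4 pp expected from the welcome email
form_lift = 0.06   # +6 pp expected if the form change is scaled to 100%

extra_from_email = round(email_recipients * email_lift)  # 500 x 4% = 20
extra_from_form = round(weekly_signups * form_lift)      # 1,000 x 6% = 60

print(extra_from_email, extra_from_form)  # 20 60
```

Twenty extra completes a week is visible in a dashboard within days; sixty is a decision-grade number. Seeing both side by side is what makes the scale-or-wait call easy.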
Behavioral structure for teams
We recommend a weekly 30‑minute OST sync:
- 5 min: quick scoreboard (metrics).
- 10 min: review experiment results or early signals.
- 10 min: pick one experiment to accelerate or pause.
- 5 min: update the OST map in Brali.
This short cadence keeps the tree alive and transfers ownership.
How to use Brali LifeOS in practice (concrete)
- Create one OST workspace per goal.
- Use nested tasks to map Opportunities → Solutions → Experiments.
- Attach analytics screenshots and links to experiment tasks.
- Add an owner and a 1‑line decision log to every experiment.
- Use Brali’s check‑ins to capture daily micro‑observations (see Check‑in Block below).
Mini case: a real 28‑day run (narrative)
We ran a 28‑day OST for "Reduce churn in premium trial by 15%". Week 0: baseline churn at 18% within first 14 days. Opportunities: confusing billing, unclear value demo, weak first milestone. Solutions: clarify billing copy (3 hours), short onboarding tour (1 week), in‑product milestone email (2 days). We ran three experiments: the email (explore), billing copy (explore), and onboarding tour (confirmatory pilot).
Outcomes:
- Billing copy change reduced friction in qualitative interviews; no significant immediate churn change (0.5 pp).
- Email nudges increased the first milestone attainment from 22% to 29% (7 pp), with a correlated churn reduction of 2.3 pp during the trial period.
- The onboarding tour required more engineering time and so became a medium‑term feature.
Our decision: scale the email and billing copy, schedule the tour for the roadmap. We logged all data and decisions in Brali LifeOS and set a follow‑up confirmatory test for the email.
Risks and ethical limits
- Experiments that create harm (misleading copy, dark patterns) are unethical and short‑sighted. We avoid manipulative tactics.
- Experiments with hidden data collection must follow privacy rules.
- When experiments touch finance or critical safety features, require sign‑off and predefine rollback plans.
Check‑in Block
Use these check‑ins daily and weekly in Brali LifeOS. They are short, concrete, and behaviorally focused.
Daily:
- Did we complete the day's micro‑task toward any experiment? (Yes/No — note which)
Weekly:
- What is one adjustment for next week? (short action)
Metrics:
- Primary metric: percent (e.g., activation %, completion %, churn %)
- Secondary metric: count (e.g., users affected) or minutes (e.g., time to first action)
One simple alternative path for busy days (≤5 minutes)
When time is tight, do the "one‑sentence shuffle": open Brali LifeOS, pick one experiment, write one sentence of progress or an insight, and move one micro‑task to "in progress." This habit preserves institutional memory and keeps experiments from stalling.
A few more caveats we learned
- OSTs are maps, not mandates. They help us choose; we still need judgment.
- Not every experiment succeeds. Expect failure rates of 40–70% for early experiments; that's normal.
- The most valuable output is learning. Quantify it. If we get a clear "learned X" statement, we treat that as a win even if the metric didn't move.
What success looks like after 28 days
- You will have 6–12 opportunities listed.
- At least 3 solutions per top opportunity.
- 3–6 experiments run: 2 exploratory and 1 confirmatory.
- Clear decisions logged for each completed experiment.
- One or two scaled changes that move the primary metric measurably.
We make one final practical promise: if you run this OST habit for 28 days with 3 experiments and the habit of updating Brali LifeOS daily, you will have clearer decisions about what to keep, pivot, or stop. The alternative—running ad hoc changes without measurement—usually leaves us with effort but not learning.
Closing micro‑scene: the end of the first week We sit down with our notebook and Brali. The first week is messy: some experiments didn't start; one had a technical snag; one email performed better than expected. We feel a mixture of relief and curiosity. Relief because we have data; curiosity about the next week of experiments. We update the tree, check the numbers, and pick the next micro‑task. Habit forms not when we complete an OST once, but when we repeat the loop: choose, test, decide, update.
Check‑in Block (place in Brali LifeOS)
Daily (3 Qs):
- Sensation: How did users seem to react? (confused / pleased / neutral / frustrated)
- Behavior: What single behavior did we observe today? (e.g., click %, drop %, signups)
- Micro‑task done?: Yes/No — which task?
Weekly (3 Qs):
- Decision: Scale / Pivot / Stop — which experiment?
- Progress: Change in primary metric this week (absolute or % points)
- Next action: One concrete next task for the coming week
Metrics:
- Metric 1: Primary % to track (e.g., Activation %, Onboarding completion %)
- Metric 2 (optional): Count affected (e.g., # new users, # emails sent)
We end on a small ritual: take one sticky note, write the one‑sentence goal, stick it near your screen, and set the first Brali task. We will meet that sticky note again in four weeks with data, not guesses.

Hack #909 is available in the Brali LifeOS app.
